
  • Unwitting Wrongdoing: The Case of Moral Ignorance

Madeline Monge

Should we blame and praise people for actions that they are ignorant of performing, or that they take to be morally neutral? There are two competing theories for the moral assessment of ignorant agents. Capacitarianism focuses on whether an agent could have done something to avoid being ignorant and instead acquire moral knowledge. Valuationism determines an ignorant agent's blameworthiness by looking at their values: someone is blameworthy if they act on their values and still commit the harmful act. My paper makes two points. First, I examine how thought experiments revolving around moral issues are written either in support of, or as counterexamples to, the two theories of moral responsibility. The description of these thought experiments causes the reader to lean in favor of what the theorist is trying to argue; in other words, these thought experiments function as intuition pumps. Second, reflection on the thought experiments used in support of the two theories of moral responsibility reveals that these theories, rather than being rivals, are two sides of the same coin.

In this paper, I presuppose that ignorance is a lack of knowledge. Knowledge I take to be a composite state that consists of at least three necessary conditions: truth, belief, and justification. This view, which can be traced back to Plato's Theaetetus, claims that what distinguishes knowledge from mere true belief and lucky guessing is that it is based on some form of justification, evidence, or supporting reasons. The truth condition of the justified-true-belief analysis of knowledge states that if you know that p, then p is true. The truth condition need not be known; it merely must obtain. The belief condition claims that knowing that p implies believing that p. Finally, the justification condition demands that a known proposition be evidentially supported. The justification condition prevents lucky guesses from counting as knowledge when the guesser is sufficiently confident to believe their own guess. Given that ignorance is the lack of knowledge, and given that knowledge has at least three necessary conditions, there are many different sources of ignorance: lack of belief, lack of truth, and lack of justification. Numerous psychological factors can give rise to each of these three conditions, among them forgetting, cognitive biases, miseducation, and lack of exposure.

Ignorance is not of a single kind; rather, there are two main classes: factual ignorance and moral ignorance (Rosen 2003, 64). When someone does not know, forgets, lacks exposure to, is miseducated about, does not retain, or misunderstands a given indisputable fact, they can become either factually or morally ignorant. These sources of ignorance can be relieved with conscious effort, or by external involvement (Rosen 2004, 302). Ultimately it is up to the agent to recognize errors that result from their ignorance. The debate surrounding the exculpating force of moral or factual ignorance is important to understand. It is generally thought that immoral actions can be exculpated only by factual ignorance, not by moral ignorance. Factual ignorance concerns descriptive facts.
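Before turning to examples, the justified-true-belief analysis sketched above can be fixed in schematic form. What follows is a standard textbook rendering, not notation from the paper itself. Writing K_S p for "S knows that p," B_S p for "S believes that p," and J_S p for "S is justified in believing that p," the claim that truth, belief, and justification are each necessary for knowledge amounts to:

    K_S\,p \;\rightarrow\; \left( p \,\wedge\, B_S\,p \,\wedge\, J_S\,p \right)

Since ignorance is the lack of knowledge, the failure of any one conjunct (a false p, an absent belief, or missing justification) suffices for ignorance of p, which is why there are three distinct sources of ignorance.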
I will use an example of slaveholding in ancient times to illustrate this concept. Suppose someone lives next door to a slaveholder but does not know that they are living next door to slaves. This is a situation of factual ignorance, because the neighbor does not know the fact that there are slaves living next door (Rosen 2003, 72). It could be because they are unobservant, or because the slaveholder does a good job of keeping the slaves quiet; there is also the chance that the neighbor does not care, is distracted by their own life, or suppresses their suspicion that there are slaves. The slaveholder's keeping of slaves is an objective, descriptive fact that cannot be disputed. Even if the neighbor denies it, the slaveholder would still have slaves, and the descriptive fact would not change.

On the other hand, moral ignorance arises when someone is ignorant of a moral fact. Moral facts are normative, and they prescribe courses of action that are true simpliciter (Rosen 2003, 64). If the neighbor of the slaveholder knows that they are living next door to slaves, but does not know that the slaveholder is harming them, this would be moral ignorance. It is morally impermissible for the slaveholder to keep and harm slaves, and the neighbor should know that the slaveholder is acting immorally by doing so. Moral ignorance does not stop at the fact that the neighbor does not know it is morally wrong to harm people; they may also not know that they should do something about the harm. This ignorance of harm can be defined as not knowing that an action may cause pain (harm) when one should know that it does. They also ought to know that, without good reason, harming people should be avoided at all costs because it is morally impermissible (Biebel 2017, 302).

Should the neighbor be exculpated because of factual or moral ignorance? If the neighbor does not know that keeping and harming slaves is morally impermissible, factual ignorance cannot exculpate them; this is a case of moral ignorance. The neighbor would be morally exculpated for their ignorance in this scenario because they are unaware that keeping and harming slaves is impermissible by moral standards (Rosen 2003, 66). There is no opportunity for the neighbor to be factually ignorant. What prompts this type of ignorance? Perhaps the neighbor does not care that the slaves are being harmed, is distracted by other events, or is afraid of the repercussions that would follow from speaking out against the moral injustice. The most important aspect of moral ignorance is that it is prescriptive, not descriptive. The argument over moral ignorance and blame revolves around what should or should not be done in light of lacking knowledge, and this turns largely on the distinction between factual and moral ignorance. Factual ignorance may sometimes exculpate an immoral action, but it is ultimately moral ignorance that will exculpate an individual (Sliwa 2020, 6).

I. Capacitarian and Valuationist Assessments of Moral Responsibility:

There are now several theories that concern moral ignorance: volitionist, attributivist, capacitarian, valuationist, parity, and pragmatic. While all differ from one another in how they attribute blame in cases of moral ignorance, the capacitarian, parity/pragmatic, and volitionist theories share an approach to blame that focuses on someone's capacity for knowledge (Biebel 2017). Valuationism and attributivism treat blameworthiness as dependent on the personal volition of the agent.
I will classify these theories under two headings: capacitarian and valuationist. I will occasionally refer to points that individual theories make, but with the example of the slaveholder I will carry the conversation forward using the two main theories.

The capacitarian theories revolve around the counterfactual capacity an individual has when deciding which action to take in a morally relevant situation: whether the harm could have been prevented. They look at situations where someone is blameworthy and ask whether it was in the agent's capacity to correct or avoid being ignorant, and whether this would have prevented them from performing the immoral action. Capacitarians consider people responsible for their actions if they are responsible for the capacities from which they act. People who lack the capacity to know what is morally permissible, say children, or people who are mentally incapable of retaining information relevant to moral standards, are not culpable for their immoral actions. They can be corrected, and may learn afterwards, but they are not blameworthy, for they lack the ability to retain vital moral considerations. Capacitarians do not skip over the fact that people's ignorance may be the reason they are acting immorally. If someone believes, from their ignorance, that what they did was the most rational and correct way of handling a morally relevant situation, then they may be exculpated. However, this justification is only one part of the knowledge needed to reach an accurate and knowledgeable conclusion. How a morally significant situation should be handled depends on someone's capacity to know whether they had the opportunity to do something differently; that different choice might have turned their ignorance into knowledge and prevented the immoral action. When someone is not aware why they are ignorant, they are also unaware of how to resolve their lack of knowledge. This is the way capacitarians take moral ignorance to be exculpable, and it encapsulates much of their concern: how can someone be blamed for not knowing a moral standard if they have never been socially conditioned or taught what the moral standard is? When I go over the vignettes that show how the capacitarian theory can be applied, I will further demonstrate the degrees of internal and external factors that influence moral ignorance, conveying how someone might be in a position to remedy their ignorance but lack the awareness or determination to do so.

Arguing against the capacitarian theory is the valuationist theory. Valuationism responds to capacitarianism with a specific criticism: capacitarianism treats an agent's ignorance as a ready excuse for immoral action, but valuationists believe this excuse is too easily applied to every case of immorality. They do not think it wise to exculpate someone who has forgotten, or is unknowledgeable about, morality. Valuationism approaches blame and exculpation for immoral actions by looking at omission and forgetfulness, which the theory considers potential sources of harmful ignorance. Harmful ignorance shows itself when someone consistently performs blameworthy immoral actions. Valuationists trace the value systems and past actions of agents to see what led them astray toward immoral action. They look at recidivism rates, as well as values and virtues.
Valuationism investigates how people are held accountable for their actions, and valuationists believe someone deserves moral praise only if they have reason to act morally. Moral responsibility is the condition of being praiseworthy, blameworthy, or excused from both on account of one's involvement in a moral act. Someone could also fail to act, or omit an action, and this too is potentially why someone deserves moral reward or punishment. Valuationists agree that psychological states may affect how someone behaves in a moral situation; they see this as one component of the person's link to acting or neglecting to act. Therefore, valuationists think such states can serve only as a partial excuse and are not strong enough grounds to exculpate someone in a morally relevant situation. On a valuationist framework, psychological states do not make someone incapable of moral knowledge, nor does someone's emotional attachment serve as a reason to act immorally. Whether someone cares about an action does not render them more or less blameworthy; it may affect how much or how little they react, but it should not affect their moral assessment. Therefore, valuationists believe that most people are, more often than not, blameworthy for their moral ignorance. If they have not responded in a morally kind manner to a situation, it is because their values align with preconceived notions from their background, and these preconceived notions are often the fundamental reasons why someone acts immorally.

Capacitarians avoid looking at an agent's value system because they want to know whether the immoral act could have been avoided, and whether the agent could have prevented themselves from being ignorant in the first place. When we look at somebody's capacity to act, we are tracing their past actions and whether or not they had the ability to change their moral knowledge. Capacitarians rely on the history of someone's actions. The values that arise from somebody's capacity to act are settled through the person's past actions from the moment they are born. Capacitarians look at past actions carefully because their culmination constitutes the very subject that a valuationist uses to counter the capacitarian argument. Values become deeply seated through someone's past actions: the more they are reinforced through choices of action and external influences, the more established they become. The deep-seated beliefs that someone has grown into values are important for evaluating the response a person has to a morally loaded situation. We will see examples of this in the altered versions of the vignettes below; without the added context, a reader would not be able to tell what the characters valued, nor what their guiding principles were. When our guiding principles manifest as actions, we are acting as a result of our values, which have been established by our capacities to act in the past.

The values we are focusing on in this paper are intrinsic. For example, valuing education leads to being more productive in helping your children with their schoolwork and helping them improve when they need it. Valuing health means you likely eat a balanced diet and exercise regularly. These specific examples of intrinsic values provide a foundation for readers to rest on when making their own evaluative judgments. Such values lead to other good things, like your children getting into a good school, and you living a life with bountiful opportunities because of your health.
The valuationist theory focuses on such intrinsic values, which allow the valuationist to rationally conclude whether the characters in the vignettes are blameworthy. Values directly shape what people do and say. Actions are subsets of behavior, and behavior is a combination of values and capacities for potential action. Action is intentional behavior: guiding values manifest as actions, and each action, whether planned or an unconscious reaction, is a value in disguise. Our actions are mostly intentional and based on our values, but sometimes they are accidents due to forgetting; they may also stem from a lack of capacity to change behaviors in the past, or potentially from a lack of values.

II. Perspectives on the Assessment of Moral Responsibility with Respect to Capacitarian and Valuationist Approaches:

In this next section, I review various vignettes that scholars have introduced into the conversation about moral ignorance, discussing how our theory of moral responsibility will change depending on how the stories are described. I will use a vignette from Alexander A. Guerrero's 2007 article, "Don't Know, Don't Kill: Moral Ignorance, Culpability, and Caution," which discusses the moral ramifications of poisoning someone with cyanide. I will also incorporate a recent, original vignette about the moral culpability of leaving a dog in a hot car. Both cases convey how the same set of events may be narrated in a way that supports either the capacitarian or the valuationist theory. The support for these theories is not derived from the events themselves but from how their contexts are described: omitting and highlighting certain features will change which theory best explains whether someone should be blamed or praised. It is impossible to give a complete account of these theories in these vignettes, but we will be careful in fully describing each case and in how it is embellished. This will show which theory best explains each vignette. Both what could have happened and what is described will show whether one is morally blameworthy in the capacitarian sense. If a vignette lends itself to the capacitarian theory, it will focus on possible actions that could have changed, depending on the protagonist's capacity to recognize that they could have done something differently. If the vignette falls toward the valuationist perspective, it is because of the protagonist's present character traits and values.

A. Case One: Guerrero's Poison

Let's consider the case of Anne, who poisons Bill by spooning cyanide into his coffee. Anne believes she is spooning sugar, and she is blameless for her false belief. Is Anne blameless for poisoning Bill? Rosen concludes that an action done from ignorance is not a locus of original responsibility. This means Anne is responsible for poisoning Bill only if she is responsible for her ignorance of the fact that she is poisoning him. Guerrero has constructed a vignette that partially supports a theory on which ignorance can be morally exculpated. What happens when details of the character's capacities and values are introduced? I am going to reintroduce Guerrero's story with these details added, to demonstrate how effectively the story can be manipulated so that the capacitarian or the valuationist theory provides a better explanation and justification of our natural inclination to blame the protagonist.

B. Case Two: Guerrero's Poison (modified)
Let's consider again the case of Anne, a single mother who is Bill's girlfriend. Bill regularly comes over in the morning to share a cup of coffee because he has been dating Anne for a few months. After a long night of helping her children prepare for an important exam, Anne believes she is spooning sugar into Bill's morning coffee and is unaware that she is poisoning him with cyanide. Anne does not know that the night before, after she went to bed exhausted from tutoring her children, she had a sleepwalking incident in which she mistakenly poured out the sugar in the sugar dish and replaced it with cyanide. Afterwards, Anne went back to bed and did not remember what she had done in the middle of the night. That morning, while Anne was spooning poison into Bill's coffee, he innocently read the morning news on his phone and did not give the sugar a second thought.

Was it in Anne's capacity to make sure she was spooning sugar and not cyanide into Bill's coffee? If Anne does not regularly sleepwalk, then we cannot expect it to be within her capacity to know that she ought to check the sugar dish just in case she had tampered with it the previous night. What about Anne's values? We know that Anne values relationships and caring for others, as well as education; this is why she stayed up to help her children prepare for an exam, and also why she regularly invites her boyfriend over for coffee. Here Anne is not blameworthy for her ignorance, nor has she acted from a set of immoral values that would prompt her to poison Bill. Nothing like this has ever happened to Anne before: she has never sleepwalked a day in her life, and she has a consistent record of showing Bill hospitality and care. Under a valuationist account of moral blame, Anne would not be considered blameworthy because her actions do not align with her values, and after the incident she continued to grieve and to disapprove of her ignorance. She did not intend to cause suffering, nor does she value suffering. Anne is, unfortunately, the cause of Bill's death because a momentary lapse in her sleep routine caused her to act involuntarily on account of ignorance. In this case, Anne would not be blameworthy by capacitarian standards, nor by valuationist standards. Anne is not originally responsible for poisoning Bill, and she would be considered morally exculpated. Based on what the story tells us about Anne's character traits and values, one can see that she did not act with malicious intent: it was an honest mistake, and a serious accident.

Even though Anne has never sleepwalked before, would it be reasonable to expect her to check her sugar before she gives it to Bill? I think anyone would consider it unreasonable to expect Anne to check her sugar, because Anne has no history of swapping out her sugar for other substances. If Anne had sleepwalked before, and she had a history of replacing her sugar with other substances, like salt, powdered bleach, or baby powder, then it would be reasonable to expect her to check. In that case, her negligence in failing to check the sugar dish would no longer be an innocent involuntary act done in ignorance. In this vignette, how a capacitarian and a valuationist consider someone to be morally blameworthy or exculpated is revealed through the protagonist's capacity and character traits.
This example shows us that the capacity of memory to prevent a potentially harmful act done in ignorance is a mitigating factor when judging immoral behavior. Anne did not willfully act immorally and is not blameworthy for her involuntary action done out of ignorance (Alvarez & Littlejohn 2017, 8). Both theories attribute a small degree of responsibility for the harm Anne has done, but not enough to judge her willfully ignorant or morally culpable. The capacitarian and valuationist theories agree with each other in their assessment of this vignette because Anne's incident is an isolated one.

Let us take another vignette to compare the capacitarian and valuationist theories. In this next scenario we have the unfortunate event of a dog dying after being left unattended in a hot car for some time.

C. Case Three: Hot Dog

Imagine Mrs. Crawford is out running errands with her medium-sized cocker spaniel in the back seat. The dog is in good health, well-groomed and fed, and Mrs. Crawford sees to it that he is well taken care of. Today of all days, Mrs. Crawford pulls into a parking lot with no shade to block the sun from her car. There is no breeze, and it is ridiculously hot outside. Instead of bringing her dog into the store with her, Mrs. Crawford decides to leave him in the car with the windows rolled up. She reasons that the air-conditioner was on during the drive to the store, so the car is not muggy or hot. She also reasons that she will not be in the store for long because she has a list of things she wants to purchase. At this point in her decision, Mrs. Crawford locks the car and leaves for the store.

Suppose Mrs. Crawford is making good time in the store. She is almost done picking out everything on her list and is careful not to get sidetracked. However, Mrs. Bailey sees Mrs. Crawford in the aisle over and makes her way to talk to her about some important matters. Mrs. Crawford is delighted to see and talk to Mrs. Bailey, and easily becomes swept up in her conversation. She remembers her dog is in the car but does not remember how hot it is outside, because the store is well air-conditioned, which contributes to her choice to talk to Mrs. Bailey for longer than expected. The dog is still outside in the hot car, and because the car is not properly ventilated or shaded, it quickly becomes extremely hot inside. The dog is soon unable to withstand the heat, becomes sick, and passes out in the back seat before Mrs. Crawford returns from the store. Mrs. Crawford is mortified. She had no idea that leaving her dog unattended for as long as she did would result in his sickness. She quickly takes her dog to the vet.

Here we have a vignette that sets up Mrs. Crawford to be morally exculpated by her ignorance if we do not consider her values or her capacity to have made changes in favor of the dog's life. We are now going to see another version of this vignette, with both Mrs. Crawford's capacity and her values included. In this next vignette, I will provide more background information showing how someone's capacity can prevent ignorance from occurring or can allow it to flourish. I will also include Mrs. Crawford's values, which will show, from the valuationist perspective, whether Mrs. Crawford is in fact acting in line with them.

D. Case Four: Hot Dog (modified)

Imagine Mrs. Crawford is a steady workaholic. Mrs. Crawford decides to skip her dog's walk and bring him to the store with her.
She is alert, and well aware that bringing her dog with her might be a hindrance, but she does it anyway. Today of all days, Mrs. Crawford pulls into a parking lot with no shade to block the sun from her car. There is no breeze, and it is ridiculously hot outside. Instead of bringing her dog into the store with her, Mrs. Crawford decides to leave him in the car with the windows rolled up. She thinks she is doing the right thing by leaving her dog behind in the car and reasons that the air-conditioner was on during the drive to the store, so the car is not muggy or hot. At this point in her decision, Mrs. Crawford locks the car and leaves for the store, confident that her decision was the right one.

Suppose Mrs. Crawford is making good time in the store. She is almost done picking out everything on her list and is careful not to get sidetracked. However, Mrs. Bailey sees Mrs. Crawford in the aisle over and makes her way to talk to her about some important matters. Mrs. Crawford suddenly forgets about her need to complete her shopping trip in a timely manner. She forgets her dog is in the car, and she does not remember how hot it is outside, because the store is well air-conditioned. Mrs. Crawford's dog is still outside in the hot car, and because the car is not properly ventilated or shaded, it quickly becomes extremely hot inside. The dog is soon unable to withstand the heat, becomes sick, and passes out in the back seat before Mrs. Crawford returns from the store. When she returns, Mrs. Crawford is mortified. She had no idea that she had been talking to Mrs. Bailey for so long. She did not even think about her dog, or the possibility that leaving him unattended for as long as she did would result in his death. She quickly takes her dog to the vet.

What can we understand about this scenario that is different from the original? With this new perspective, we can see that Mrs. Crawford was completely forgetful in the care of her dog. While she is a workaholic with a one-track mindset, her decision to bring her dog along seems out of the ordinary and not in line with her normal character traits. We can tell from this story that Mrs. Crawford values social relationships, which is why she stopped to talk to Mrs. Bailey; independence, which is why she went out to the store in the first place; and the well-being of others, hence her decision to leave her dog in the car. Did Mrs. Crawford have the capacity to change her course and take measures that would secure the safety of her dog? I believe so. She was not tired; she was not overcome with thoughts of work that would normally cause her to forget other obligations. She was distracted, but by something she had the capacity to say no to. Here I would like to point out that Mrs. Crawford was in her right mind and had the capacity to know that talking to Mrs. Bailey would disrupt her schedule of running errands, and that this change of schedule had the potential to upset or cause extreme distress to the dog she had left in her car. Mrs. Crawford ought to have known that the dog in the car was the most pressing of her concerns. She knows that by moral standards her dog has moral worth and is a moral responsibility that she has tasked herself with. Mrs. Crawford knows the difference between morality and immorality, and she is fully aware that her dog has a right to life. By placing her own dog in harm's way, Mrs. Crawford showed not only ignorance of fact but moral ignorance as well.
Since she did not know that she was possibly harming her dog by talking to Mrs. Bailey and staying in the store longer than planned, Mrs. Crawford would be considered morally blameworthy. She knew that her dog was in the car. Even though she may not have known that by leaving him there she was potentially endangering her animal, this shows moral ignorance, because she did not consider her dog's life worthy enough to take the extended measures that would have ensured his survival. On the capacitarian theory she is considered blameworthy, but from the valuationist perspective she is considered innocent.

III. Capacitarianism and Valuationism are Two Sides of the Same Coin:

Before we cut deeper into each of the theories independently, I would like to point out that these vignettes show us how different theories about moral ignorance become more accurate attributions of blame depending on how the story is told. The way an author describes a vignette will directly affect the way a reader chooses to apply a theory, and the author's choice to write objectively or subjectively will also affect whether a reader approaches the ignorant action with a mind to blame or to exculpate. This mode of thinking is something we see in moral realism. There are two positions in moral realism under which we might categorize the capacitarian and valuationist theories. First, normative realism posits that ethical sentences describe positions grounded in objective features. Some of these features may be true simply in that they report the descriptions accurately, such as "killing someone is bad." These descriptions do not contain subjective opinions, which aids in their accuracy and helps to establish moral truths. Second, the version of metaethical realism that can be used to look at these theories states that, in principle, it is possible to know the facts about which actions are right and wrong, and about which things are good and bad (Copp 2007). This position depends on the subjective opinions of others to determine these facts. Metaethical realism takes a more common-sense approach, asking questions like "should we reasonably expect someone to check the sugar dish before serving sugar?" We need to keep moral realism in mind while assessing capacitarianism and valuationism because it directly affects our assessment of them.

We can see that assessments of moral responsibility are sensitive to additions and omissions of information regarding the capacities and values of agents. With the incorporation of certain details about an agent's past actions and value systems, a reader can be swayed to agree or disagree with certain theories of moral assessment. Certain details require someone to be objective or subjective in their interpretation of the events (Baumann 2019), and this can greatly affect how a story is understood by various readers. However the story is told, whether sparse or elaborate, the rationale behind omitting and adding detail will always have a direct effect on the reader's intuition about the story. Depending on how the vignette is written, the reader can be manipulated to believe that certain events make one theory more conclusive than another. What this shows us is that the philosophers who wrote the vignettes wrote them in a way that proves the point of their own theories. These vignettes function as intuition pumps.
Whatever the philosopher wants to say activates the reader's intuitive approach to assessing the situation. While capacitarian and valuationist theorists may focus on different characteristics of someone's motivation, their approaches to assessing moral responsibility are similar. Both look at the context in which the act was performed; they differ in which part of that context they take to be relevant. Capacitarians take the most relevant context to be the behavior leading up to the harmful act. The capacity of the agent also depends on their knowledge of their wrongdoing. Capacitarians ask whether agents could have done something differently in the past to prevent their immoral act from taking place. If agents engage in a harmful immoral act, it is a result of their ignorance, and whether to blame an agent who acted out of ignorance depends on their capacity to know that there was some way they could have prevented themselves from doing so. If they did not have the capacity to know they were acting immorally, or that they could have prevented themselves from acting as they did, then they would not be considered blameworthy. Thus, an agent acting out of ignorance without the capacity to know they are doing so would be morally exculpated. Valuationists choose not to look at the behaviors preceding the events and instead examine the value system of the agent. They do this because they think the value system of a person should be considered the relevant context for the moral assessment of an act (Arpaly 2004).

The community of moral theorists has situated these two theories in the contexts of past actions or of value systems. Up until this point, we have discussed the two theories independently; however, I would like to show how closely they are related. If a vignette focuses on the capacity or the value system of a person, then readers will be persuaded to agree with the theory that provides the better explanation of moral judgments concerning the actions. The more detailed the information regarding the agent's context, the easier it is to apply the theory that best suits the framework; the information needs to highlight either the agent's value system or the agent's past actions. If the vignette does not include any such context, then it is natural for readers to assume and fill it in themselves. The various assumptions that arise from different readers' perspectives can lead to deep disagreement about the moral assessment of actions. An under-described thought experiment leaves gaps and provides only inconclusive information with which to fill them. Without enough information, a reader must add their own, and when a reader substitutes in the information missing from the vignette, people can be pulled into a deep disagreement about the moral assessment of the agent. This makes it easy for readers to feed their own thoughts into the story, reading into it what they hope to get out of it, which creates circular reasoning on the reader's part. In all cases, different people will bring different assumptions to an under-described thought experiment, which will inevitably lead to problems in applying particular theories to it.
Unfortunately, there is no way to halt varying interpretations, because it is unreasonable to expect anyone to provide every possible angle a situation can have. In other words, there is no way for the author to close the room for interpretation entirely. If a deep disagreement arises, then it must be a result of the author's handling of the vignette. For a deep disagreement to form, the vignette would need an unclear description of an agent's past actions and capacities, or an unclear description of their value system, because this is what pits the capacitarian and valuationist standpoints against each other. When the contexts of past actions and value systems are clear and detailed in a vignette, it is unlikely that a deep disagreement will occur. Rather than a clash of theories, the verdicts would be expected to converge because of their connection.

Throughout this paper I have been tracing a route through the literature on moral assessment that presents the valuationist and capacitarian approaches as being in competition with each other. However, I think this view wrongly pits the two theories against each other. The values that a person has will manifest themselves in their actions; likewise, their actions are guided by their values, whether consciously or unconsciously. When we lay out this connection, we can see how someone's past actions and value system are actually connected. With that said, I think it would be in our best interest not to play the two against each other, and instead to show that they are dependent on one another. This holistic perspective demonstrates how these two theories are two sides of the same coin.

IV. Conclusion:

The more a vignette spells out a history, the more we get a sense of the value system of the person involved. Any value system shapes how people perceive information and influences their decisions; this means it also influences their intuitions and builds people's overall foundations for action. How a person has acted up to the point of the scenario usually tells us the story of that person's value system, and here we get a better sense of how they would act in future situations based on how they have acted before. If a vignette is written in detail, spelling out a person's capacities, values, or both, then the competing theories proposed by valuationists and capacitarians will likely converge. However, if it is sparse, with little to no information, then the two rival theories may clash. They will seemingly work against each other because the readers are left to fill in the details; without an established history or a described value system, readers have nothing prescribing their thoughts. The clash is due to the under-description of the vignette, not to the theories used to interpret it. I think this is where much of the deep disagreement stems from. In this conversation about the moral assessment of blame, we have two theories that are seemingly different but work in tandem. They present a great opportunity to change the way that we, as philosophers, attribute blame, especially since wishful thinking carries no moral valence. If readers can trace the history and potential of a person through their capacity to act out their value system, then they will not need to speculate about what the author meant. After all, it is not the job of the reader to fill in the blanks; it is up to the author to explain a thought experiment in full in order to establish their theory (Baumann 2019).
Any description that influences a perspective is an important factor, but we need to decide whether someone is or is not morally culpable in a particular situation, and to do this it is necessary to know all the relevant information about the past. Swapping things around, omitting necessary information, and changing the context to fit someone else's narrative of events is not an effective way to assess the morality of an agent correctly, nor is it conducive to figuring out whether they are morally exculpable. Withholding information is one way to prevent knowledge, and if we are concerned with knowing whether someone has performed an immoral action, then the truth is of the utmost importance (Baumann 2019). This is the way things become known. Looking back at the debate between the valuationist and the capacitarian, knowledge of the subject's past is necessary for determining whether someone should be considered morally blameworthy. To determine a person's present capacities and values, it is vital to investigate their past: a person's past determines their values just as much as it determines their capacities. A person's past values can be written off because of their present capacities; likewise, a person's past capacities can be written off because of their present values. The present moment is a culmination of all the previous values an agent has upheld. Valuationists point out that a person's values are a result of what they did or did not do in the past, and these values depend on the agent's capacity to understand and act on them. Similarly, capacitarians see capacities as manifestations of value systems. The key to finding out someone's capacities and values is buried in their past.

What is the difference between these two theories if they both require knowledge of the person's past behavior? Are they distinct theories with similar foundations, or are they two sides of the same theoretical coin? Since both theories require the past to determine present conditions, it is possible that proposing them as distinctly different theories does not hold up to scrutiny. Values are conditions that people think should be upheld and reinforced, while capacities are behaviors that people are capable of performing. Values are conditions that people strive for, provide numerous filters for action, and are considered valuable in the social world. Once someone has a set of values, their subsequent actions are determined. When capacitarians look at the capacities of individuals, they are looking at what actions those individuals would have been expected to perform given their capabilities, and these actions are expected because of individual values. This is where we see the two theories speaking a similar language. If we need extensive information about an individual's past to form a coherent judgment of blame, then it is possible that these two theories derive from the same theoretical foundation, grounded in the past. The past matters to both because a person's past actions are suggestive of their values, and a person's past values are suggestive of what actions they can perform based on their capacities. To look further into this topic, I think it is indispensable to ask: how do we know what someone's past values or capacities were, and how can we tell whether they have led to present conditions?

References
Alvarez, M., and C. Littlejohn. 2017. "When Ignorance is No Excuse." In Responsibility: The Epistemic Condition, 1-24.
Aristotle. 2011. Nicomachean Ethics. Chicago: University of Chicago Press.
Arpaly, N. 2004. Unprincipled Virtue: An Inquiry Into Moral Agency. Oxford: Oxford University Press.
Baumann, M. 2019. "Consequentializing and Underdetermination." Australasian Journal of Philosophy, 511-527.
Bernecker, S. 2011. The Epistemology of Fake News. Oxford: Oxford University Press.
Biebel, N. 2017. "Epistemic Justification and the Ignorance Excuse." Philosophical Studies. https://link.springer.com/article/10.1007%2Fs11098-017-0992-4
Copp, D. 2007. "Introduction: Metaethics and Normative Ethics." In The Oxford Handbook of Ethical Theory. Oxford: Oxford University Press.
Guerrero, A. A. 2007. "Don't Know, Don't Kill: Moral Ignorance, Culpability, and Caution." Philosophical Studies 136, 59-97.
Harman, E. 2011. "Does Moral Ignorance Exculpate?" Ratio 24, 443-468.
Rosen, G. 2003. "Culpability and Ignorance." Proceedings of the Aristotelian Society 103, 61-84.
Rosen, G. 2004. "Skepticism About Moral Responsibility." Philosophical Perspectives 18, 295-313.
Sliwa, P. 2020. "Excuse without Exculpation: The Case of Moral Ignorance." Oxford Studies in Metaethics, 72-95.

  • All Power to the Imagination: Radical Student Groups and Coalition Building in France During May 1968 and the United States During the Vietnam War

Calder McHugh, Bowdoin College
Editors: Alexis Biegen, Sophia Carter
Fall 2019

Abstract

Student-led social movements in May of 1968 in France and through the late 1960s and early 1970s in the United States captured the attention of each nation at the time and have had a profound impact on how Americans and the French understand their respective states today. Both movements held the lofty goal of completely reshaping their respective societal structures, but the vast differences between the cultures in which they were carried out produced distinct end results. In France, student protests sparked mass mobilization of the nation and, at their height, were seen by most of the country in a positive light. The broader movement, which also involved worker participation, won material gains for workers in the nation. Across the Atlantic, on the other hand, student protests were met with mostly ill will from the American working class. This work will focus particularly on the ways in which a history of strikes and a popular Communist Party in France both allowed for mass mobilization and stopped the students from pursuing more radical change. It will also work to challenge dominant narratives in political science around coalition building.

I.

In mid-May 1968, as 10 million people marched in demonstration through the streets of every major French city, student leader Daniel Cohn-Bendit sat down for a wide-ranging interview with philosopher Jean-Paul Sartre. Cohn-Bendit cogently articulated his goals for the student movement as well as its potential challenges. "The aim is now the overthrow of the regime," he said. "But it is not up to us whether or not this is achieved. If the Communist Party, the [general confederation of labor union] CGT and the other union headquarters shared it… the regime would fall within a fortnight." Six years later and across the Atlantic Ocean, the Weather Underground, a militant leftist organization in its fifth year of operation and composed of young radicals, published a book entitled Prairie Fire: The Politics of Revolutionary Anti-Imperialism. The Weather Underground wrote, "Our intention is to disrupt the empire… to incapacitate it, to put pressure on the cracks, to make it hard to carry out its bloody functioning against the people of the world, to join the world struggle, to attack from the inside."

II.

Radical social movements aimed at the overthrow of capitalism and capitalist-based governments existed throughout the Western world in the late 1960s and early 1970s. In Italy, West Germany, France, and the United States, these movements were particularly wide-ranging and distinctly impacted each society, causing momentous political and cultural upheaval. This work will focus on the latter two nations. The mass mobilization that shook France was confined largely to one month: May 1968. In the middle of March, France's leading newspaper Le Monde had called France's citizens too "bored" to protest in the same manner as was occurring in West Germany and the United States.
A mere six weeks later, after the occupation of the University of Nanterre on March 22 sparked conversation about collective action around the country, French students occupied the University of Paris at the Sorbonne, in the Latin Quarter of Paris, sparking nightly clashes with the police. Streets were barricaded, all transportation was shut down, and worker mobilization reached a height of 10 million on strike. Notably, students' grievances were separate from those of the workers. The students rallied around a popular slogan of the time, "all power to the imagination," which captured their collective interest in enacting changes to the educational system that would allow for a freer and more accepting university structure. Comprising Trotskyites, Maoists, anarchists, and others on the Left, the student movement also included many who believed in the violent overthrow of the Fifth Republic of France and the complete reshaping of society. As Suzanne Borde, who in May 1968 had recently left her childhood home for Paris, said, "Everything changed [in May, 68], my way of thinking, everything… My favorite expression at the time was 'La Vie, Vite' (Life, Quickly)! I wanted to change the usual way of life." The workers, who made up the lion's share of the protestors but had fewer public clashes with the police, were concerned less with political ideology or societal restructuring than with material gains that would make their lives better, such as wage increases. Their protests ran in conjunction with the students', but their union was a tenuous one: the French Communist Party (PCF) and its associated labor union, the Confédération Générale du Travail (CGT), controlled much of the political action amongst the workers and was deeply suspicious of the goals of the student movement from its nascent stages.

Ultimately, two central events led to the movement's demise. Perhaps ironically, the first was originally interpreted as a success: the protests led to governmental upheaval and President Charles de Gaulle's temporary departure from the country. After weeks of uncertainty, representatives of de Gaulle's government negotiated what came to be termed the Grenelle Agreements with the leadership of the CGT. Resulting in more bargaining power for unions as well as a 35 percent increase in the minimum wage and a 10 percent increase in average real wages, these concessions pacified many workers, leading them back to the factory floor. Second, upon returning to the country on May 30, Charles de Gaulle organized a significant counter-protest on the Champs-Élysées, dissolved the legislature, and called for new legislative elections that took place in late June. De Gaulle's party, the Union of Democrats for the Republic (UDR), won a massive victory and went back to being firmly in control of the nation, while the PCF lost more than half of its seats.

Social protest in the United States was not so neatly circumscribed into a few months. Anti-Vietnam War protests took many shapes over numerous years. For the purposes of this work, analysis will be confined to the Students for a Democratic Society (SDS) organization, its offshoot groups, and their respective impacts on the broader movement. Launched with the Port Huron Statement in 1962, before the official beginning of the American war in Vietnam in 1965, the organization purposefully did not couch its goals in traditionally communist or Marxist rhetoric because, unlike in France, there was no appetite for it in the United States.
Rather, they argued quite persuasively, "We are people of this generation, bred in at least moderate comfort, housed now in universities, looking uncomfortably to the world we inherit." While fewer than 100 people signed the Port Huron Statement, by 1965 the SDS had organized the "March on Washington to End the War in Vietnam," which 15,000 to 25,000 people from around the country attended. This march both attracted a degree of attention and trained future organizers of better-coordinated marches on Washington, including the Moratorium March of November 1969, which had over 250,000 attendees. While SDS remained a strong political force through the late 1960s, by its 1969 convention in Chicago the group had moved significantly to the left ideologically and had developed internal political differences that detached it from the unified spirit of the Port Huron Statement. As SDS gathered in Chicago, by the end of the weekend of June 18-22, three separate factions had emerged. One, calling itself the Progressive Labor Party (PL), argued for Maoist and worker-oriented solutions to what they perceived as the ills of America. Another, the Revolutionary Youth Movement (RYM), became the foundation of what was eventually called the Weather Underground; they advocated for a radicalization of SDS to fight American imperialism alongside the Black Panthers and revolutionary groups around the world. Finally, the Revolutionary Youth Movement II (RYM II) agreed with RYM on most substantive issues but believed in a more traditional Marxist approach to solving them. According to sociologist Penny Lewis, none of these groups, including the PL, whose entire revolutionary strategy was based on a cross-class alliance with workers, enjoyed any significant support from the working class. She writes, "The obvious reason for this was the near-unanimous embrace of Cold War anticommunism in the ranks of labor and the collapse of Communist Party influence within the class."

Left without the possibility of even a tenuous connection between young radicals and the broad working class, the Weather Underground began to participate in militant action in an attempt to bring the Vietnam War home. In March of 1970, Weather Underground member Bernardine Dohrn anonymously recorded a transmission and sent it to a California radio station on behalf of the group. She warned, "The lines are drawn… Revolution is touching all of our lives. Freaks are revolutionaries and revolutionaries are freaks… within the next 14 days we will bomb a major U.S. institution." While her timeline was a bit optimistic, the group bombed the Capitol in March of 1971 and the Pentagon in May of 1972, all the while intending not to injure anyone (no deaths were associated with these two actions). Their most famous (and infamous) deed was an accident: also in March of 1970, two members (Diana Oughton and Terry Robbins) accidentally detonated a bomb in a Greenwich Village townhouse while assembling homemade explosives, killing themselves and a third "Weatherman" (Ted Gold), who was walking into the house. The Weather Underground did continue its actions after the conclusion of American involvement in Vietnam in 1975, but pared down much of its more violent activity. The group, whose members found their way onto the FBI's Most Wanted List, eventually disbanded; many now work as professors, educating and informing new generations of American thinkers.

III.
The outgrowth of the fragile connection between student protest and worker protest in France, as well as the lack of any significant worker mobilization in the United States, has much to do with the way each nation developed in the wake of World War II. During the altercations of May 1968 in France, President Charles de Gaulle and the PCF represented two opposing poles of influence. This, in many ways, defined the conflict: de Gaulle's fairly centrist (by modern standards) regime was forced to contend with a popular Communist Party facing a radical push from student activists combined with a wellspring of support from French workers. Interestingly, both de Gaulle and the Communists derived much of their legitimacy from their actions a quarter-century prior, during World War II. De Gaulle and his supporters, along with the PCF, were the two most significant resistance forces against the collaborationist Vichy government. As such, in the first legislative election after the war, in 1945, the PCF won a plurality of the vote, with 26.1 percent, and controlled the most seats in the legislature. De Gaulle did not participate in these elections. By 1967, while the PCF's support had diminished, it remained a powerful force: it held 21.37 percent of the vote, a slight drop, but was able to build an electoral coalition with its fellow Leftist parties, the Federation of the Democratic and Socialist Left (FGDS) and the Unified Socialist Party (PSU). Together, the three received 53.43 percent of the vote. The revolution in 1968, then, did not come out of nowhere. Not only could the PCF count on at least 20 percent of France's support throughout the 1950s and 60s, it also organized strikes. Significant agrarian protests led by the PCF occurred in 1959 and 1960, and strikes reached a zenith of the pre-1968 era in 1963, when the number of days workers spent on strike was the highest in ten years. As Kenneth Libbey, who is both a scholar of and an advocate for the PCF, argues, "the belief in the ability of a mass movement to sweep aside obstacles to its success is a dominant theme of the party. Its acceptance makes the arguments about the transition to socialism at least plausible." By May of 1968, significant differences existed between the often anarchist, Maoist, or Trotskyite student groups and the Stalinist PCF and CGT. However, these disagreements on ideology were not significant enough to halt the cross-coalitional movement, at least at first.

In the case of Leftist groups in the United States, whether they marched under the Maoist banner of coalition-building with the working class (in the case of the PL movement) or had more anarchist tendencies as well as an interest in engaging with black revolutionary groups such as the Black Panthers (in the case of the Weather Underground), they had very little historical precedent or organizational support upon which to draw. Even at its relative peak in 1944, the Communist Party in the United States (CPUSA) had a confirmed membership of only 80,000. In the context of the Cold War, it became impossible to be an avowed Communist in public life. In a period often called the "Second Red Scare" or "McCarthyism," the United States Congress convened the House Un-American Activities Committee (HUAC) in order to attempt to find and punish Communists whom they believed to be working for the Soviet Union. In 1954, the United States government formally outlawed the CPUSA.
While in the French case the Communist Party was associated with brave resistance during World War II, politicians in the United States were able to successfully present the CPUSA as a subversive group intent on aiding the Russians in the Cold War. As an ideology, McCarthyism faded through the 1950s and was eventually seen for what it was: a witch-hunt. However, in the Cold War context, a genuine Communist Party in the United States would have been something of an anachronism at best. Thus, radicals in the United States had both to divorce themselves from the extremely weak institutions that did exist and to strive to create their own culture and identity. The divergent histories of France and the United States shaped not only the popularity of social movements in the late 1960s, but also the strategies and tactics employed by student radicals in both nations.

IV.

A shared characteristic of the radical students in France and the United States was their distaste for slow-moving, marginal improvements. In fact, French radical students had been preaching this ideology since the early 1960s. Trotskyite dissidents, many of whom were engaged in the leadership of the 1968 movement, submitted a manifesto to the socialist publication Socialisme ou Barbarie in 1961 outlining many of the same principles as the Weather Underground did eight years later. They argued, "One hundred and fifty years of 'progress' and 'democracy' have proved that no matter what reforms are applied to the capitalist system they will not change the real situation of the worker." As is typical of the French case, revolutionary politics were more wrapped up in the labor movement than they were in the United States. The manifesto continues, "The workers will not be free of oppression and exploitation until their struggles have resulted in setting up a really socialistic society, in which workers' councils will have all the power, and both production and economic planning will be under worker management." Fredy Perlman, a student who aided in the shutdown of the Censier Annex of the Sorbonne, believed in a direct connection between the actions at the universities and the larger strikes. He saw the main contribution of the students at the Censier to be the formation of worker-student action committees, in which the two groups coordinated actions together. Perlman, who published a booklet entitled Worker-Student Action Committees, France, 1968 in 1970, wrote, "The formation of the worker-student committees coincides with the outbreak of a wildcat strike: 'In the style of the student demonstrators, the workers of Sud-Aviation have occupied the factory at Nantes.'" For Alain Krivine, the founder of one of the most influential youth activist groups of 1968, the Jeunesse Communiste Révolutionnaire, increased rights for workers were essential to the success of the movement. However, he did not believe that the leaders of the unions or of the Communist Party best represented the workers' interests. He says, "For me [leftwing political leaders Pierre] Mendès-France and [François] Mitterrand were shit… Mendès-France and Mitterrand could be an alternative, but for us it was a bad one." Student demonstrator Isabelle Saint-Saëns largely agrees: "When we marched with the workers we felt united with them, but it remained theoretical as well," she said. Nevertheless, the students did see the workers as the key to their success, because the workers were willing to mobilize and held tremendous political power through their sheer numbers.
As opposed to the situation in France, protest in the United States was based largely around denouncing the imperialism inherent in the conflict in Vietnam. In the wake of the SDS convention in June of 1969, the student radicals who formed the leadership of the splinter group the Weather Underground sprang into action. The organization's leadership included many young radicals who had been involved in the demonstrations against the Vietnam War at Columbia University the year before, including Bill Ayers, Bernardine Dohrn, and Mark Rudd, who famously wrote in a letter to Columbia President Grayson Kirk: "You call for order and respect for authority; we call for justice, freedom, and socialism. There is only one thing left to say. It may sound nihilistic to you, since it is the opening shot in a war of liberation. I'll use the words of LeRoi Jones, whom I'm sure you don't like a whole lot: 'Up against the wall, motherfucker, this is a stick-up.'" The Weather Underground's first major action, termed the "Days of Rage," was scheduled to take place from October 8-11, 1969 in the streets of Chicago. The action's specific purpose was to protest the trial of the "Chicago Eight," a group on trial for antiwar activism during the 1968 Democratic National Convention. While the organizers hoped for the participation of around 50,000 militants, they got only a few hundred. The action, which included looting and burning in downtown Chicago and appeared not to have a particularly cogent mission, was panned not only by the mainstream media but also by many fellow Leftist organizations, who argued that the organizers were alienating the broader public from their cause. The Weather Underground itself, though, argued that the "Days of Rage" were part of a larger effort to "bring the war home." At this point in the antiwar fight, the Weather Underground had decided that it could not count on the participation of workers because of their lack of any significant socialist or communist sympathies. As such, it planned demonstrations and militant actions to raise the consciousness of the greater populace to the horrors of the war abroad. Friends and siblings who were drafted, sent to Vietnam, and often killed in action particularly galvanized American youth. Partially to announce its formation, the Weather Underground released a manifesto entitled "You Don't Need A Weatherman To Know Which Way The Wind Blows." A subsection of this argument, "Anti-Imperialist Revolution and the United Front," states, "Defeating imperialism within the US couldn't possibly have the content, which it could in a semi-feudal country, of replacing imperialism with capitalism or new democracy; when imperialism is defeated in the US, it will be replaced by socialism—nothing else. One revolution, one replacement process, one seizure of state power—the anti-imperialist revolution and the socialist revolution, one and the same stage." Student radicals in the United States saw the need to foment violent revolution in order to move toward a state willing to accept socialism as a rational political ideology. The stated aims of the two movements, then, were quite similar. Each believed that its government was not truly democratic, and that there was a distinct need to expel the ruling elite from power. The two groups framed the issue using a shared language of the Left that dealt primarily with expressing solidarity with the oppressed. Divergence in the movements appeared in each group's understanding of its own role in society.
In France, while students were suspicious and sometimes downright dismissive of the PCF and the CGT, they believed they needed the participation of the workers (many of whom were members of those organizations) to succeed. The split at the SDS convention in June of 1969, on the other hand, further alienated the Weather Underground even from fellow Leftist organizations. While the Weather Underground hoped to gain more support for its cause amongst the general populace, the group also understood the nature of the political system in the United States and made the conscious decision to exist outside of it. In "You Don't Need a Weatherman…" they wrote, "How will we accomplish the building of [a Marxist-Leninist Party]? It is clear that we couldn't somehow form such a party at this time, because the conditions for it do not exist in this country outside the Black Nation." Much of the reason for the divergent outcomes, as well as the divergent tactics and framing, of the student movements in France and the United States has to do with the political opportunity structures that existed in each nation during the late 1960s. These are broadly rooted in the historical differences in the treatment of Communism as an ideology in both nations. V. Many scholars have argued that the character of the revolution of May 1968 was defined by the youth and, to a lesser degree, the intellectuals of the nation. Maybe more important for mass mobilization in France, though, was the nation's history of strikes. According to the French historian Stéphane Sirot, while in other nations strikes are often the result of failed negotiations, in France they frequently occur either during or before negotiations with employers. Strikes are such successful tactics of negotiation because they work on two levels. First, they have an offensive element: mass demonstrations attract the attention of the media. Second, they work defensively: by refusing to work, strikers put pressure on bosses to find a quick solution. In their paper "The Shape of Strikes in France, 1830-1960," published in 1971, the scholars Edward Shorter and Charles Tilly argue that French strikes, while prevalent throughout this period, changed significantly in character over its course. This, according to Shorter and Tilly, has largely to do with the significant expansion of industrial unionism across the European continent at the end of the 1930s. They use measurements of size, duration, and frequency to calculate the shape that these strikes took. Below is an example of their model (Table 1.1). The table shows two distinct strike scenarios. What Shorter and Tilly refer to as "Industry X" represents a scenario in which strikes are long but small and occur fairly infrequently. "Industry Y" has strikes that occur more frequently and with a larger size, but do not last as long. By the 1960s in France, the model for strikes looked quite a bit more like "Industry Y" than "Industry X." Below is, once again, Shorter and Tilly's graphic explanation of this phenomenon, based on their historical cataloguing of strikes (Table 1.2). This is significant in that massive, short demonstrations, while not necessarily more successful than those that are smaller and play out over a longer period of time, tend to receive more attention from the public and the media due to their dramatic nature.
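Shorter and Tilly's notion of a strike's "shape" can be pictured as a box whose three dimensions are how often strikes occur, how many workers they involve, and how long they last. A minimal worked illustration of the idea, using hypothetical numbers rather than Shorter and Tilly's actual data:

  total conflict (worker-days) ≈ frequency (strikes/year) × size (workers/strike) × duration (days/strike)

  Industry X (infrequent, small, long):   5 strikes × 200 workers × 100 days = 100,000 worker-days
  Industry Y (frequent, large, short):   50 strikes × 1,000 workers × 2 days = 100,000 worker-days

The point of the comparison is that the same aggregate quantity of conflict can take very different shapes. By the 1960s, French strike activity increasingly arrived in Industry Y's form of frequent, massive, brief stoppages, the form most visible to the press and the public.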
The sheer number of strikes through the 1960s made it easier for workers to mobilize around issues that ran adjacent to the concerns of the students, such as the right to self-management in the workplace, but were certainly not the same. Conversely, in the United States before 1968 there were few examples of large-scale strikes. Other than the steel workers' strike in 1959, which included around half a million participants, frequent general strikes had not existed in the nation since the 19th century. Additionally, while union activity was certainly stronger in the United States in the 1960s than it is today, the protests of the decade were more focused on the antiwar effort than on the rights of workers. VI. Likely due at least partially to its comfort with general strikes and mass mobilization, the French populace largely supported the students and their efforts to protest, expressing ire for the police force when the two sides clashed. On May 10, 1968, in what has since been termed the "night of the barricades" (because of the barriers that students constructed to slow down the police), French police and students clashed violently in the streets of Paris. 80 percent of Parisians, though, supported the students and believed the fault for escalating the violence lay with the police. Nevertheless, cultural differences between the youth and both the ruling class and their worker allies persisted in France as well, and these manifested themselves in the priorities of the students. Before the revolution of 1968, the French schooling system was extremely restrictive. Students could not voice their own ideas in the classroom, and the gender and sexual politics of the university were also extremely conservative—men and women were often kept apart. Thus, in considering how all of French society should change, the University system was at the front of many students' minds. As Perlman argued about the revolutionary movement, "What begins [when the Universities are occupied] is a process of collective learning; the 'university,' perhaps for the first time, becomes a place for learning. People do not only learn the information, the ideas, the projects of others; they also learn from the example of others that they have specific information to contribute, that they are able to express ideas, that they can initiate projects. There are no longer specialists or experts; the division between thinkers and doers, between students and workers, breaks down. At this point all are students." As might be expected, while many supported the broad protests of the students and their right to demonstrate, proposals like the total transformation of the University structure, for which Perlman argued, were less popular with, or important to, much of French society. Thus, the French students created their own political ideology and culture that was often separate from that of the more institutionalized labor movements. However, while their culture and their priorities often separated them from the workers, the French students also believed the workers to be necessary to their success. When the Grenelle Accords were signed and a majority of the workers agreed to go back to work, the students quickly demobilized. As the scholar Mitchell Abidor argues in the introduction to his oral history May Made Me, "For the workers, it was not the qualitative demands of the students that mattered, but their own quantitative, bread-and-butter issues." Ultimately, the French students were incapable of understanding or accepting this.
Abidor continues, "The ouvriérisme—the workerism—so strong on the French left led the students to think the workers were the motor of any revolution, which left the vehicle immobile because the engine was dead." So, after the workers returned to work, the students also quite quickly demobilized. The alliance between the students and the workers in France was further complicated by the students' tenuous relationship to the PCF and the CGT, organizations which were active participants in the very society that the students were striving to upend. The PCF and the CGT, naturally concerned with their own institutional success, framed their arguments and made agreements based on the existing political opportunity structure in France. Many student radicals, on the other hand, saw it as their charge to revise those very structures. The PCF was thus forced to walk a fine line between maintaining its own institutional legitimacy and representing the more revolutionary elements of its own party. Libbey, summarizing the French political scientist Georges Lavau, argues, "[the PCF] has assumed the role of tribune: articulating the grievances of discontented groups as well as defending the gains of the workers against attempts by the bourgeoisie to undermine them. The PCF has thus become a legitimate channel for protest, protecting the system from more destructive outbursts. This protection failed in 1968, of course, but Lavau contends that the party's role of tribune nonetheless coloured its response to the crisis." Lavau and Libbey's contention that the PCF lost the role of tribune in May of 1968 is worth noting because although the CFDT and the CGT were the ones to negotiate with de Gaulle's government, they had lost control of the situation. They were able, ultimately, to demobilize the workers, but the Left lost significant support, which showed in the elections of June 1968, where the PCF lost half of its seats. The Grenelle Accords in many ways crystallized the differences between the gauchiste students and the institutionalized, Stalinist political parties. These differences, which existed throughout the movement, were momentarily put aside as everyone took to the streets. After most workers returned to the factory floor, though, student radicals, as well as radical elements within the Communist Party, discussed their disappointment with the limited scope of the Grenelle Accords. Prisca Bachelet, who helped to organize the nascent stages of the movement during the demonstrations at the University of Nanterre on March 22, 1968, said of the leaders of the CGT, "they were afraid, afraid of responsibility." Éric Hazan, a cardiac surgeon and a radical Party member during 1968, argued that the Communists' actions at the end of May and their negotiations with the government amounted to "Treason. Normal. A normal treason." Jean-Pierre Vernant argued, "The May crisis is not explained and is not analyzed [by the Party]. It is erased." The students and their allies had good reason for frustration. They believed that the Party theoretically meant to represent them had betrayed many of the principles for which they were fighting. Members of the Communist Party, for their part, quite obviously held distaste for many of the student radicals.
In a very obvious reference to the student movement, the Communist Party leader Roland Leroy said at the National Assembly on May 21, 1968, "The Communists are not anarchists whose program tends to destroying everything without building anything." For their part, the students' significant miscalculation was their belief that Party leaders like Leroy did not speak for the interests of the workers. Hélène Chatroussat, a Trotskyite, argued at the time, "I said to myself, [the workers] are many, they're with us… so why don't they tell the Stalinists [the PCF] to get lost so we could come in and they could join us?" To the contrary, many of the workers who went on strike in the factories were uninterested in broader political change or in politics in general. They simply hoped for a positive change in their material conditions. As Colette Danappe, a worker in a factory outside Paris, told Mitchell Abidor, "The students were more interested in fighting, they were interested in politics, and that wasn't for us." Danappe continued about the Grenelle Accords, "We got almost everything we wanted and almost everyone voted to return… Maybe we were a little happier, because we had more money. We were able to travel afterwards." At first glance, it would appear that the situation in the United States and the goals of antiwar demonstrators would have made it easier to mobilize a broader cross-section of the population. By mid-May of 1971, 61 percent of Americans responded "Yes, a mistake" to the Gallup poll question, "In view of developments since we entered the fighting in Vietnam, do you think the U.S. made a mistake in sending troops to fight in Vietnam?" However, a larger segment of the older population in the United States opposed the war than of the younger generation. These older Americans did not support the war, but they largely did not support the protest movements either. The lasting images of social movements in the United States in the 1960s all include what came to be referred to as "the counterculture." The counterculture is depicted, stereotypically, as young men and women with flowers in their hair, listening to Creedence Clearwater Revival, and holding radical aspirations for the dawn of a new age in America. This group was generally maligned by significant portions of older generations of Americans in particular, who believed the youth movement to be related more to drug use than to any serious concern. While the counterculture's goals of promoting peace and community were in many ways quite sincere, and the fear of the draft added to the youth's outrage, an older generation of Americans refused to take their style of protest seriously. Table 1.3 models the conditions for mass mobilization along two dimensions: the popularity of a movement's grievance and the availability of political allies. The situation in France in May of 1968 can be found in the bottom-right box: the broad-based grievances of students were largely supported, and the students found political allies in the labor movement and the Communist Party. In the United States, mass mobilization did not occur on the same scale because, although the popularity of the grievance was high (as support for the American war in Vietnam was low), no significant political allies (who could have been found in the older generation of antiwar Americans) existed. This situation can be found in the top-right box. This disdain for the youth movement was made obvious in the way that Walter Cronkite and Dan Rather covered the clashes at the 1968 Democratic National Convention in Chicago.
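A minimal reconstruction of the scheme Table 1.3 appears to depict, based on the two dimensions named above (the left-hand, low-popularity column is inferred from the logic of the model rather than taken from the original):

                            Grievance unpopular        Grievance popular
  No political allies       little mobilization        limited mobilization (United States, late 1960s)
  Political allies          little mobilization        mass mobilization (France, May 1968)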
Members of the counterculture movement, calling themselves "Yippies" (included in this group were many members of the SDS), descended on Chicago to protest the Vietnam War and the lack of democracy inside the Democratic Party's presidential nomination process. Cronkite had already argued on air that the Vietnam War had become unwinnable, but when he and Rather covered the 1968 DNC together, their attention was focused on conventional politics as a whole—and they quite obviously had very little respect for the protestors. Each argued that it was the Yippies who provoked a bloody confrontation with the police, with Rather stating, "Mayor Richard Daley vowed to keep it peaceful, even if it took force to keep the peace. He was backed by 12,000 police, 5,000 national guardsmen, and 7,500 regular army troops. But the Yippies succeeded—they got their confrontation." Through the 1960s, many protest and counterculture groups (including the Student Nonviolent Coordinating Committee, Americans for Democratic Action, and Vietnam Veterans Against the War, to name a few) created and sustained significant cultural distance from much of American society. Members of the Weather Underground, despite some of their uniquely militant positions, dressed and spoke in a language common to the broader counterculture movement. They did so largely because they felt themselves unable to work within the boundaries of a political system that, even on the left, did not come close to representing their political ideology. In forming their own cultural identity, Leftist groups in the United States did manage to catch the attention of the masses, even if that attention was largely negative. In this way, their issues and demands were placed at the center of the conversation, provoking a fraught societal debate. VII. The legacies of the social movements of the late 1960s in the United States and France are hotly debated. The historian Tony Judt, holding an unmistakable disdain for the student movement in France, wrote, "It is symptomatic of the fundamentally apolitical mood of May 1968 that the best-selling books on the subject a generation later are not serious works of historical analysis, much less the earnest doctrinal tracts of the time, but collections of contemporary graffiti and slogans. Culled from the walls, noticeboards and streets of the city, these witty one-liners encourage young people to make love, have fun, mock those in authority, generally do what feels good—and change the world almost as a by-product… This was to be a victimless revolution, which in the end meant it was no sort of revolution at all." On the other hand, the scholar Simon Tormey wrote about the events of May 1968, "1968 represented a freeing up of politics from the congealed, stodgy and unimaginative understandings that had so dogged the emergence of an oppositional politics after the second world war. It unleashed a wave of joyous experimentation, evanescent and spontaneous efforts to challenge the dull routine of the repetitious lives that had been constructed in and through advanced capitalism." As we can see, this duality of perspective on the revolutionary movements existed in both France and the United States. While the Weather Underground, lacking significant political allies and saddled with hostile press coverage, has mostly been portrayed negatively in the years since, some scholars believe that it altered a broader American consciousness.
As Arthur Eckstein writes, "Thousands of New Leftists agreed with the Weathermen's analysis of what had gone awry in America… the last 50 years have seen remarkable progress in black rights, women's rights, gay rights, Hispanic and Asian rights… Weatherman's violence... did not impede that progress." Although Eckstein certainly does not offer a ringing endorsement of the group's militant tendencies, he does argue here that it spawned social progress in ways that it did not expect. Interestingly enough, these more positive interpretations from historians and political scientists contradict the feelings of the student radicals themselves. Neither group had an exact moment of demobilization, but it became increasingly clear to young leaders throughout the early 1970s that they had not fomented the change for which they had hoped. In France especially, a growing frustration existed towards the Communist Party and its labor wing, which points quite obviously to the dangers of coalition building. The students' purported political allies came to be thought of as traitors by many of the student radicals. The frustrations and divisions born in 1968 contributed to, if they did not directly cause, the French Communist Party's long slide into irrelevance during the 1970s and 80s, as Abidor argues. He writes, "Once it lost the PCF as the mediating force to represent its grievances, the French working class fulfilled Herbert Marcuse's 1972 warning that "The immediate expression of the opinion and will of the workers, farmers, neighbors—in brief, the people—is not, per se, progressive and a force of social change: it may be the opposite." The PCF understood this latent conservatism in the working class of 1968. Not so the New Left student movement." The coalition was successful very briefly in May and resulted in positive material gains for workers—through pay raises, France became a little bit more equal. The most significant legacies of the movements in France and the United States, though, were separate from any coalition. The French and the American students, each galvanized to be part of the revolutionary vanguard and inspired to change their societies, felt a deep sense of disappointment after the events of the late 1960s. Broken alliances and dashed goals led to the perception that they had let themselves and their ideals down. Measured this way, the revolution failed, and Judt is right to argue that in this context, "it was no sort of revolution at all." A middle-ground perspective is well articulated by the May '68 protestor Suzanne Borde, who noted, "It made it possible to change the way children were educated, leading many teachers to reflect and to teach differently. Experimental schools opened... But it had no consequences on political life and failed to change anything real." Holding a completely different interpretation of the outcome, Maguy Alvarez, an English teacher in France, told the New York Times journalist Alissa Rubin, "Everything was enlarged by 1968; it determined all my life." Rubin titled her article "May 1968: A Month of Revolution Pushed France Into the Modern World." So, maybe "these witty one-liners [that encouraged] young people to make love, have fun, mock those in authority, generally do what feels good," did change France as a byproduct. The kicker of Alvarez's quote is that she gave it to Rubin not while deeply examining the political consequences of the era, but while walking through an exhibition of posters and artworks from the period.
During his interview with Borde, Abidor noted towards the end of the discussion, "May '68 didn't result in anything concrete, then." Borde responded, "Sure it did. It completely changed the way I live." VIII. Much of the existing literature in the field of social movement theory is concerned with the ways in which social groups successfully frame their movements to a broader public in order to increase popular support, attract political allies, and best take advantage of existing political opportunity structures. This work, although not formatted as a traditional most-similar-systems design, is concerned with the comparison of a social movement that attempted to tap into public support (the French student movement) with one that appeared at times to actively avoid building coalitions (the Weather Underground). More than anything else, the historical differences between France and the United States led to vastly different political opportunity structures for each social movement in the late 1960s. Yet neither group compromised its idealistic political ideology, and for this reason both groups failed to achieve their ultimate goals. Nevertheless, both did change cultural aspects of the societies in which they operated. The conclusion that these movements succeeded culturally despite failing politically challenges the existing social movement literature, which argues that successful social movements should always attempt to build broad support. French student radicals found cultural success not because of their coalition with the working class but often despite it. In the United States, much of the lasting memory of the SDS concerns what happened after it split into the Weather Underground. Certainly, a degree of this remembrance is negative—the French student radicals, with their "power to the imagination," are remembered in a much rosier light than the Weather Underground, which is often considered a terrorist organization in the United States. However, the Weather Underground and its writings continue to inspire generations of young activists, who do not necessarily subscribe to its militant tactics but are inspired by its political ideology. Coalition building can without a doubt aid in the success of a social movement. However, it can also at times minimize its impact. As we examine these two distinct approaches to creating change, our analysis shows that coalition building may capture the historical imagination, but it can also hinder change. IX. Since the financial crisis of 2008, questions about the value of coalition building have continued to roil activists, in particular in the United States, where the crisis originated and which now exists in a period of unstable economic and political development that scholars have called a "crisis of neoliberalism." Current social protest movements have faced some of the same issues confronting protestors in the 1960s and early 1970s—the Occupy Wall Street movement presents a worthy case study. In many respects, the Occupy movement is the closest analog in recent history to the May 1968 movement in France.
Sparked by young people, the protests were concerned with income inequality and managed to create an entirely new language for talking about money in the United States through popular slogans—"we are the 99%." Branding itself a revolutionary movement, Occupy eschewed traditional leadership structures and declared an "occupation of New York City" on September 29, 2011, which resulted in a series of clashes with the police and ended with the protestors being forced out of their home base of Zuccotti Park on November 15 of the same year. Protests continued for months afterwards around the world, but they did not maintain the same zeal as in September, October, and November of 2011. While the Occupy movement burned brightly and quickly petered out in a way similar to May '68, its results are of a somewhat different character than those in France and are thus worth examining here. Most significantly, the United States government was never forced to come to the bargaining table with Occupy, and the leaderless movement has been criticized for never laying out concrete demands. Additionally, though, the amorphous nature of the group allowed it to buck the trend of significant splintering along ideological lines—post-Occupy activism has simply dispersed to campaigns like #AbolishICE and protests against the Keystone XL Pipeline. Its greatest success has likely been the proliferation of discussion of income inequality in the United States, which has led to campaigns for an increased minimum wage. However, as with the student protestors in France, questions remain as to whether "we are the 99%" has been honored or coopted. Hillary Clinton launched her 2016 presidential campaign in Iowa with the statement that "the deck is still stacked in favor of those at the top." Ted Cruz highlighted in the lead-up to 2016 that "the top 1% earn a higher share of our income nationally than any year since 1928," and Jeb Bush said "the income gap is real." The rhetoric is well and good, but each of these politicians has, according to Occupy, aided in the widening of this gap. There are positive messaging lessons other protest groups can learn from the Occupy movement, but in many respects Occupy lost control of the narrative—the shrinking 1% now speaks for the 99%.
Bibliography:
Abidor, Mitchell. May Made Me: An Oral History of the 1968 Uprising in France. Chico: AK Press, 2018.
Abidor, Mitchell. "1968: When the Communist Party Stopped a French Revolution." New York Review of Books, April 19, 2018. https://www.nybooks.com/daily/2018/04/19/.
Alterman, Eric. "Remembering the Left-Wing Terrorism of the 1970s." Review of Days of Rage by Bryan Burrough. The Nation, April 14, 2015. https://www.thenation.com/remembering-left-wing-terrorism/.
Ashley, Karin, Bill Ayers, Bernardine Dohrn, John Jacobs, Jeff Jones, Gerry Long, Howie Machtinger, Jim Mellen, Terry Robbins, Mark Rudd, and Steve Tappis. "You Don't Need A Weatherman To Know Which Way The Wind Blows." New Left Notes, June 18, 1969. https://archive.org/stream/YouDontNeedAWeatherman.
Berger, Dan. Outlaws of America: The Weather Underground and the Politics of Solidarity. Oakland: AK Press, 2006.
da Silva, Chantal. "Has Occupy Wall Street Changed America?" Newsweek, September 19, 2018.
DeBenedetti, Charles. An American Ordeal: The Antiwar Movement of the Vietnam Era. Syracuse: Syracuse University Press, 1990.
Drake, David. "Sartre and May 1968: The Intellectual in Crisis." Sartre Studies International. Volume 3, No. 1, 1997. 43-65.
Duménil, Gérard and Dominique Lévy. The Crisis of Neoliberalism. Cambridge, MA: Harvard University Press, 2011.
Eckstein, Arthur M. "How the Weather Underground Failed at Revolution and Still Changed the World." TIME, November 2, 2016. http://time.com/4549409/the-weather-underground-bad-moon-rising/.
Gautney, Heather. "What is Occupy Wall Street? The History of Leaderless Movements." Washington Post, October 10, 2011. https://www.washingtonpost.com/national/on-leadership/what-is-occupy-wall-street-the-history-of-leaderless-movements/2011/10/10/gIQAwkFjaL_story.html?utm_term=.44928aed6c6e.
Gitlin, Todd. The Sixties: Years of Hope, Days of Rage. New York: Bantam, 1987.
Gregoire, Roger and Fredy Perlman. Worker-student Action Committees, France, May 1968. Paris: Black & Red, 1970.
History.com Editors. "Chicago 8 Trial Opens in Chicago." A&E Television Networks, November 16, 2009. https://www.history.com/this-day-in-history/chicago-8-trial-opens-in-chicago.
Honigsbaum, Mark. "The Americans Who Declared War on Their Country." The Guardian, September 20, 2003. https://www.theguardian.com/film/2003/sep/21/.
Horowitz, Irving Louis. "Culture, Politics, and McCarthyism." The Independent Review. Volume 1, No. 1, Spring 1996. 101-110.
Investopedia. "The 10 Largest Strikes in U.S. History." 2012. https://www.investopedia.com/slide-show/10-biggest-strikes-us-history/.
Judt, Tony. Postwar: A History of Europe Since 1945. New York: Penguin, 2005.
Judt, Tony. Marxism and the French Left: Studies in Labour and Politics in France, 1830-1981. New York: Oxford University Press, 1986.
Kann, Mark E. The American Left: Failures and Fortunes. New York: Praeger Publishing, 1982.
Kleinfeld, N.R. and Cara Buckley. "Wall Street Occupiers, Protesting Till Whenever." New York Times, September 30, 2011. https://www.nytimes.com/2011/10/01/nyregion/wall-street-occupiers-protesting-till-whenever.html?_r=1&ref=occupywallstreet.
Levitin, Michael. "The Triumph of Occupy Wall Street." The Atlantic, June 10, 2015. https://www.theatlantic.com/politics/archive/2015/06/the-triumph-of-occupy-wall-street/395408/.
Lewis, Penny. Hardhats, Hippies, and Hawks: The Vietnam Antiwar Movement As Myth and Memory. Ithaca: Cornell University Press, 2013.
Libbey, Kenneth R. "The French Communist Party in the 1960s: An Ideological Profile." Journal of Contemporary History. Volume 11, No. 1, January 1976. 145-165.
McPartland, Ben. "So Why Are the French Always on Strike?" The Local, March 31, 2016. https://www.thelocal.fr/20160331/why-are-french-always-on-strike.
Montgomery, David. "Strikes in Nineteenth Century America." Social Science History. Volume 4, No. 1, 1980. 81-104.
New World Encyclopedia. "Communist Party, USA." 2017. http://www.newworldencyclopedia.org/entry/Communist_Party,_USA.
Poggioli, Sylvia. "Marking the French Social Revolution of '68." NPR, May 13, 2008. https://www.npr.org/templates/story/story.php?storyId=90330162.
Political Statement of the Weather Underground. Prairie Fire: The Politics of Revolutionary Anti-Imperialism. United States: Communications Co. Under Ground, 1974. https://archive.org/stream/PrairieFire/.
Politics Newsmakers Newsletter. "Students for a Democratic Society (SDS)." Public Broadcasting Service, 2005. https://www.pbs.org/opb/thesixties/topics/politics/newsmakers_1.html.
Rather, Dan and Walter Cronkite. "ARCHIVAL VIDEO: Protests Turn Violent at the 1968 Democratic National Convention." For CBS News, uploaded March 14, 2016 to ABC News. https://abcnews.go.com/Politics/video/archival-video-protests-turn-violent-1968.
Revelations from the Russian Archives. "Soviet and American Communist Parties." United States Library of Congress, August 31, 2016. https://www.loc.gov/exhibits/archives/sova.html.
Rubin, Alissa J. "May 1968: A Month of Revolution Pushed France Into the Modern World." New York Times, May 5, 2018. https://www.nytimes.com/2018/05/05/france-may-1968/.
Rudd, Mark. "Letter to Columbia President Grayson Kirk," April 22, 1968. In "'The Whole World Is Watching': An Oral History of the 1968 Columbia Uprising," by Clara Bingham. Vanity Fair, April 2018. https://www.vanityfair.com/news/2018/03/the-students-behind.
Saad, Lydia. "Gallup Vault: Hawks vs. Doves on Vietnam." Gallup, May 24, 2016. http://news.gallup.com/vault/191828/gallup-vault-hawks-doves-vietnam.aspx.
Saba, Paul. "SDS Convention Split: Three Factions Emerge." The Heights, July 3, 1969. https://www.marxists.org/history/erol/ncm-1/bc-sds.htm.
Sartre, Jean-Paul and Daniel Cohn-Bendit. "Jean-Paul Sartre Interviews Daniel Cohn-Bendit, May 20, 1968." Verso, May 16, 2018. https://www.versobooks.com/blogs/3819/.
Schnapp, Alain and Pierre Vidal-Naquet. The French Student Uprising: Nov. 1967-June 1968. Translated by Maria Jolas. New York: Beacon Press, 1971.
Seidman, Michael. The Imaginary Revolution: Parisian Students and Workers in 1968. New York: Berghahn Books, 2004.
Seidman, Michael. "Workers in a Repressive Society of Seductions: Parisian Metallurgists in May-June 1968." French Historical Studies. Volume 18, No. 1, 1993. 255-278.
Shorter, Edward and Charles Tilly. "The Shape of Strikes in France, 1830-1960." Comparative Studies in Society and History. Volume 13, No. 1, January 1971. 60-86.
Silvera, Alain. "The French Revolution of May 1968." The Virginia Quarterly Review. Volume 47, No. 3, 1971. 336-354.
Stöver, Philip and Dieter Nohlen. Elections in Europe: A Data Handbook. London: Oxford University Press, 2010.
The Learning Network. "Nov. 15, 1969 | Anti-Vietnam War Demonstration Held." New York Times, November 15, 2011. https://learning.blogs.nytimes.com/anti-vietnam-war-demonstration-held/.
Tarrow, Sidney. Power in Movement: Social Movements and Contentious Politics. New York: Cambridge University Press, 1994.
Tormey, Simon. "Be Realistic—Demand the Impossible: The Legacy of 1968." The Conversation, February 14, 2018. https://theconversation.com/be-realistic-demand-the-impossible.
Varon, Jeremy. Bringing the War Home: The Weather Underground, the Red Army Faction, and Revolutionary Violence in the Sixties and Seventies. Berkeley: University of California Press, 2004.

  • Can Pascal Convert the Libertine? An Analysis of the Evaluative Commitment Entailed by Pascal's Wager

Can Pascal Convert the Libertine? An Analysis of the Evaluative Commitment Entailed by Pascal's Wager Neti Linzer While Pascal's wager is commonly approached as a stand-alone decision-theoretic problem, there is also a crucial evaluative component to his argument that adds oft-overlooked complexities. Though we can formulate a response to these challenges by drawing on other sections of the Pensées, an examination of an argument from Walter Kaufmann highlights enduring difficulties with this response, leading to the conclusion that Pascal lacks the resources to convincingly appeal to the libertine's self-interest. I. Introduction Pascal's wager, an argument due to the 17th-century mathematician and philosopher Blaise Pascal, is generally analyzed as a self-contained, formalizable problem, embodying one of the first applications of decision theory (1). In short, it calculates the expected utility of believing in God against that of not believing, and concludes that, inasmuch as rationality entails maximizing expected utility, i.e. making the decision that will most likely lead to the most preferable outcome, it is rational for us to believe in God (2). This is a "wager" insofar as we cannot know with certainty that God exists, and the most we can do is gamble on the fact that He does. But what I will argue is that the wager argument presupposes a certain evaluative commitment, which Pascal's targeted audience, the 'libertine,' notably lacks (3). The libertine is someone who does not believe in God, and whose value system is instead oriented towards earthly, bodily happiness. I claim that for someone thus constituted, Pascal's wager fails to be convincing. The wager, however, is only one part of Pascal's never-finished apologetic project, the preliminary notes of which are organized in the Pensées, meaning 'Thoughts.' I will show that if we examine some of the other arguments Pascal makes throughout the Pensées, then we can formulate a response to this objection on Pascal's behalf. As Pascal describes her, the libertine is deeply unhappy when she thinks about the contingencies of the human condition, and she therefore values activities which entertain her and divert her from these disturbing thoughts. In his description of the libertine's condition, Pascal performs something of a Nietzschean-style 'revaluation' of this approach to life: it includes a destructive phase—in which Pascal argues that the libertine's values are based on false presuppositions—followed by a constructive phase—in which Pascal presents the libertine with a more attractive evaluative framework. Once she is in this new cognitive space, the libertine is prepared to be persuaded by the wager. I argue, however, that inasmuch as there are alternative ways for the libertine to revalue her mortality, Pascal fails to make an argument that will necessarily appeal to her self-interest. Drawing on the work of the 20th-century philosopher Walter Kaufmann, I argue that the libertine can instead revalue her mortality by embracing it, by recognizing the way in which the fact of her death is precisely what makes her life worthwhile. And while Kaufmann's approach certainly might also fail to be convincing, it at least offers a viable alternative, and has two advantages over Pascal's: (i) it draws on known facts (our mortality) rather than theoretical possibilities (an immortal soul), and (ii) it does not require any kind of wager.
The upshot is that, while the destructive phase of Pascal's 'revaluation' may have been successful, the success of the constructive phase is dubious. As an appeal to the libertine's self-interest, the wager falls short. The first section of this paper presents the objection to Pascal's argument, the second section develops a response on Pascal's behalf, and the final section presents enduring difficulties with Pascal's argument by introducing Kaufmann's alternative approach. II. The Libertine's Objection to Pascal's Wager Crucially, Pascal's wager is written in a language that the libertine will understand—the language of self-interest. We can summarize Pascal's argument by saying that the libertine's current lifestyle can, at most, offer her finite happiness: "what you are staking is finite." If she gambles on belief in God, however, then the libertine opens herself up to the possibility of gaining infinite reward, and, as Pascal puts it, "all bets are off wherever there is an infinity." As long as the chances that God does not exist are not infinitely greater than the chances that He does, Pascal urges the libertine, "there is no time to hesitate, you must give everything." Pascal thereby appeals to the libertine's instrumental rationality by identifying what it is that the libertine intrinsically desires—namely, her own "beatitude" (4)—and then by arguing that in order to truly satisfy this desire, the libertine must wager on belief in God (5). But there is a catch: the infinite happiness guaranteed by God is incomparable to any form of finite happiness that the libertine now enjoys. This is certainly true after the libertine accepts the wager, since belief in God demands that the libertine radically transform her lifestyle, substituting the dictates of God's will for those of her own. But I will argue that choosing to accept the wager requires the libertine to undergo what is arguably an even more dramatic transformation: she must transform her value system. This is because the wager does not just promise the libertine more happiness; rather, it promises her qualitatively different happiness. And the wager only works if the libertine values this sort of happiness. It is true that Pascal never specifies what he means by "an infinite life of infinite happiness," but inasmuch as he believes that it is the result of a life of faith, we can assume that he is referring to a traditional Catholic conception of heaven. Consider, then, the following reply in the mouth of Pascal's libertine: an infinite life with God sounds absolutely miserable! First of all, inasmuch as my happiness is derived, at least in part, from the enjoyment of bodily pleasures, I cannot imagine being happy without my body. Happiness means hunting expeditions, games of cards, lavish feasts, and good company—where can I find those in heaven? Moreover, God promises to unite with believers in heaven. But why should I want to unite with God? You are offering me something that satisfies absolutely none of my desires. My life would not be better if God existed, even (and this is crucial) if God rewarded me as a believer! Pascal's wager works by presenting the libertine with a gamble: if God exists, there will be infinite happiness for those who believe and infinite misery for those who do not. This is because God promises to reward believers by uniting with them in heaven, and to punish non-believers by burning them, or otherwise tormenting them, in hell.
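To make the structure of the gamble explicit, it may help to set out the decision matrix that commentators standardly extract from the Pensées. What follows is a schematic reconstruction rather than a quotation from Pascal: the probability p stands for any nonzero credence that God exists, and f1 and f2 stand for finite, this-worldly payoffs.

                         God exists (p > 0)        God does not exist (1 − p)
  Wager for God          +∞ (salvation)            f1 (a finite worldly life)
  Wager against God      −∞ (damnation)            f2 (a finite worldly life)

  EU(wager for God)     = p·(+∞) + (1 − p)·f1 = +∞
  EU(wager against God) = p·(−∞) + (1 − p)·f2 = −∞

On this arithmetic, any nonzero p makes wagering for God the uniquely rational choice. What the libertine will dispute, as we will see, are the values in the cells rather than the calculation itself.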
But from the libertine's perspective, there is no gamble: the prospects of heaven and hell are both unattractive, and since we are dealing with infinite amounts of time, they are both infinitely distressing prospects. There is therefore nothing worth gambling on. We might try to assure the libertine that once she is a believer, she will desire eternal life in heaven. We often persuade people to do something by promising that they might enjoy it, even if right now they cannot understand why. To take a mundane example, you might happily follow the recommendation of a friend to try a new food, even if you cannot imagine what it would be like to eat it. True, the stakes of this decision are qualitatively lower, but the same epistemic uncertainty seems to be at play: you cannot know whether you will appreciate this food until you taste it, and you also cannot know whether you value a relationship with God until you attempt to build one. Inasmuch as wagering on the food does not involve any sort of evaluative transformation on your part, wagering on God might be the same way. But there is a disanalogy between the two cases. Pascal is presenting the libertine with a certain decision matrix in which he assigns an infinitely positive value to heaven and an infinitely negative value to hell (6). In order for the libertine to assign the same values to the given outcomes in the matrix, she must transform her evaluative framework, so that this-worldly happiness is no longer her highest value. The case of the new food, however, does not require a transformation of this sort. You know that you will either like or dislike the food, and you know that you value eating foods that you like and disvalue eating foods that you do not like. Of course, there is still a gamble involved in trying the food, since it is impossible to know how you will feel about its taste (7). But crucially, this puts you in a position that is analogous to that of the libertine considering Pascal's wager only provided that she has already made the necessary evaluative transformation. It does not put you in the position of a standard libertine, who values her current happiness above all else and therefore does not see anything to gamble for. Let's describe a case that would be more analogous to the wager. Henrietta is a principled ascetic, meaning that she values abstention from earthly pleasures to whatever extent possible. As such, she has sworn off earthly pleasures and adheres to a strict diet of only bread and water. Suppose that her cousin, Henry, a food connoisseur, wants to convince her to try some caviar. He knows that she has never tasted caviar before, but he argues that, by her own expected utility calculations, those who eat caviar enjoy it so much that she stands to gain more than lose from trying it. But of course, even if Henrietta thought that Henry's calculations were correct, they would be meaningless to her. As a matter of principle, she does not value the sensual pleasure provided by eating delicious food. Indeed, the experience of enjoying the food might be even more negative for Henrietta than the experience of disliking it, inasmuch as she has moral disdain for sensual pleasure. Henry's calculations will only be persuasive if Henrietta abandons her current ascetic values and adopts a more hedonistic outlook. This is similar to the situation that the libertine finds herself in when presented with Pascal's wager.
Just as one cannot meaningfully convince Henrietta to eat caviar by first convincing her to abandon her ascetic lifestyle, suggesting that the libertine will desire heaven once she is a believing Christian reformulates the challenge rather than addressing it. By formulating the libertine's challenge this way, we realize just what Pascal's wager requires: before the libertine can decide to wager on God's existence, she must first revolutionize her evaluative framework, performing what the philosopher Friedrich Nietzsche would refer to as a "revaluation of values," i.e. a complete reversal of her normative commitments. At present, a religious lifestyle is not in the libertine's self-interest; the libertine's conception of happiness is tethered to her physical existence in this world, and therefore she will not be moved by promises of her soul being rewarded in another world. Now that we have established that the libertine must be induced to reassess her values before she can be persuaded to wager on God's existence, we must ask: does Pascal present the libertine with such an argument? III. Pascal's Revaluation There is an inherent challenge in trying to influence someone to "revalue their values": namely, identifying which values one can appeal to in formulating the argument. Generally, pragmatic arguments like Pascal's wager take the agent's values as a starting point, and then proceed to demonstrate that a certain action will do a better job of furthering those values. But if we use values as a starting point, how can we cogently provide someone with practical reasons to adopt a wholly new evaluative framework, without invoking the very values that they do not yet possess? To see how we might formulate a "revaluation" without recourse to other values, we can draw inspiration from Friedrich Nietzsche, whose philosophical undertaking was just that: a revaluation of all values. In his work Nietzsche's Revaluation of Values: A Study in Strategies, the contemporary Nietzsche scholar E.E. Sleinis analyzes the various strategies that Nietzsche uses to achieve his evaluative revolution. One strategy that he discusses, "destruction from within," undermines a certain value by revealing that it is internally inconsistent (8); the value is thereby undermined on its own terms. There are a few different permutations of this strategy. One, which Sleinis refers to as "false presuppositions," aims to show that "the value requires a fact to obtain that, as it turns out, fails to obtain." In attacking the factual, rather than the evaluative, component of the value system, Nietzsche is able to undermine it from within, without recourse to other values. For example, Nietzsche devalues "disinterested contemplation as the ideal of aesthetic contemplation" by arguing that humans are simply incapable of disinterested contemplation. We cannot disengage from our passions, emotions, and other interests when we contemplate works of art. "We can put this point in more graphic terms," explains Sleinis, by arguing that "the pure aesthetic contemplator is a fiction" (9). In what follows, I will demonstrate how Pascal launches a similar attack on the libertine's value system by arguing, in a parallel manner, that the happy libertine is a fiction. As mentioned, the wager is merely a part of Pascal's broader apologetic project, and it is within this broader project that Pascal employs this Nietzschean revaluation strategy.
There are many notes in the Pensées devoted to bemoaning the wretchedness of the libertine's condition and arguing that man simply cannot be happy without God. And while we do not know where Pascal would have placed these ideas (if at all) in his final work, we can still argue that, Pascal's intentions aside, they do an excellent job of preparing the libertine to be receptive to the wager. Once Pascal convinces the libertine that her approach to life was premised on a false presupposition, he is able to urge her to gamble on a new one. Pascal undermines the libertine's approach to life—happiness derived from entertainment or diversions as the ideal of happiness—in the same way that Nietzsche undermines disinterested contemplation as the ideal of aesthetic contemplation: he shows that humans are incapable of achieving happiness through their diversions (10). While traces of this argument are evident throughout the Pensées, Pascal's most sustained argument for it appears in his section "Diversions." After examining this argument, we will turn to the possibility of an alternative response on behalf of the libertine in the spirit of the philosopher Walter Kaufmann. Pascal presents us with an imagined dialogue, presumably between a believer and a libertine, in which the libertine explains her approach to life: "is not happiness the ability to be amused by diversion?" (11). For the libertine, to be happy is to be entertained. We can understand some of the more perplexing behaviors of people if we realize that their underlying motivation is to divert and entertain themselves: "those who philosophize about it, and who think people are quite unreasonable to spend a whole day chasing a hare they would not have bought, scarcely know our nature." People do not hunt because they want the kill, but rather because hunting provides them with entertainment. Pascal argues that all men, even kings who are in "the finest position of the world," are miserable "if they are without what is called diversion" (12). The reason that we value diversion, explains Pascal, is that it allows us to avoid confronting all of the unpleasant features of our condition. We do not seek "easy and peaceful lives," because those would force us to think about "our unhappy condition" (13). The "unhappy" quality of our condition is delineated in the believer's reply to the libertine; the libertine asks whether happiness is not the ability to be amused by diversions, to which the believer replies, "No, because that comes from elsewhere and from outside, and thus it is dependent, and subject to be disturbed by a thousand accidents which cause inevitable distress" (14). The activities with which the libertine happily amuses herself are all highly contingent, and they are made easily inaccessible by any number of factors that are necessarily out of the libertine's control. Moreover, all of the libertine's amusements are necessarily ephemeral, so that even if they are miraculously undisturbed by illness or accident, they will inevitably be disturbed by death. This is the primary source of the libertine's inconsolable misery in Pascal's conception—no matter how much happiness she derives from her activities in this world, her impending death constantly threatens to rob her of everything. As Pascal puts it, man "wants to be happy, wants only to be happy, and cannot want not to be so. But how will he go about it?
The best way would be to render himself immortal, but since he cannot do this, he has decided to prevent himself from thinking about it" (15). Thoughts of mortality thwart the libertine's ability to enjoy the world around her, and so the libertine blocks out these thoughts with diversions. In Pascal's example, the libertine hunts vigorously for a hare that she would never buy, because while "the hare does not save us from the sight of death...the hunt does" (16). All of this explains how Pascal can argue, in the spirit of Nietzsche, that valuing the happiness derived from diversions as the ideal of happiness falsely assumes that humans can find happiness in diversions. Pascal demonstrates that they cannot. Our diversions are inevitably "subject to be disturbed by a thousand accidents," and this causes "inevitable distress" (17). Crucially, the distress is inevitable; even if we spend most of our time completely amused by diversions, the fact that our source of happiness is external and contingent puts us in a constant state of instability. We are rendered eternally dependent on factors beyond our control and are therefore powerless to console ourselves in the face of adversity unless the universe conspires to offer us diversion. We might wonder if Pascal's case is overstated. Couldn't the libertine seek happiness through something more substantial than a mere "diversion," like, for example, self-fulfillment? I think that for Pascal the answer is no. This is because death robs any pursuit—even the pursuit of self-fulfillment—of enduring meaning. As Pascal puts it: "the final act is bloody, however fine the rest of the play. In the end, they throw some earth over our head, and that is it forever" (18). The libertine can only be satisfied if she does not think about the "final act" that will undermine "the rest of the play," and because of this, all of her pursuits, even those that appear most meaningful, are really attempts to distract herself from this sobering fact. Pascal suggests that if the libertine actually confronted the truth of her condition, she would desist from all of her pursuits—even her desire for self-fulfillment—because they would no longer mean anything. That the libertine seeks to distract herself from the contingency of her condition with something that is itself contingent is, I think, sufficient to undermine the libertine's approach to life. But Pascal goes even deeper in exposing the problems with the libertine's approach. He writes that, "The only thing that consoles us for our miseries is diversion, and yet this is the greatest of our miseries. For it is mainly what prevents us from thinking about ourselves, leading us imperceptibly to our ruin" (19). The libertine's pursuit of diversions makes genuine self-knowledge impossible—if she is always distracting herself, she will never take the time to understand herself and her condition and search for a more reliable and stable form of happiness. How can we say that someone is happier the more diverted they are, if someone who is diverted is also wholly alienated from herself? (20). It is this consideration that motivates Pascal's famous observation that "man's unhappiness arises from one thing alone: that he cannot remain quietly in his room" (21). As Pascal sees it, diversion as a source of true happiness—much like Nietzsche's disinterested contemplation—is, indeed, a fiction. Pascal has induced a value crisis in the libertine by rendering what she previously valued—the amusements of earthly life—fundamentally meaningless.
So what now? Left to live without diversion, Pascal explains, “we would be bored, and this boredom would lead us to seek a more solid means of escape” (22). I will argue that Pascal asking the seeking libertine to consider the possibility of an immortal soul is, in a certain sense, similar to Nietzsche’s imagined demon presenting the possibility of eternal recurrence–i.e., the doctrine that our lives will be repeated infinitely many times into the future. Nietzsche presents this as a mere possibility, the consideration of which is nonetheless capable of inspiring an evaluative transformation in his readers (23). Entertaining the possibility of eternal recurrence hopefully inspires us to seek meaning in the lives that we are living on earth, rather than placing all of our hopes on a life after death. Analogously, before the wager, Pascal does not expect the libertine to believe in the immortal soul as a metaphysical fact, but he nonetheless presents it to her as an attractive possibility, powerful enough to reorient her life. If the possibility of an immortal soul isn’t even on her radar, then the wager argument cannot even get off the ground. But Pascal believes that considering this possibility will induce the libertine to seek God; the wager will then point out that doing so maximizes her expected utility, and eventually she will be certain of God’s existence (24). What makes the libertine’s condition so unhappy are all of the external threats that face her at every moment, the most debilitating of which is her own death (25). The libertine’s old approach was to avoid confronting this reality. As Pascal puts it, “as men are not able to fight against death...they have taken it into their heads, in order to be happy, not to think of them at all.” What Pascal offers the libertine is a solution that is truly sustainable: instead of valuing distractions from our mortality, we can value that which denies it altogether. We can reject that part of us that gets piled with dirt, since it can only make us unhappy, and instead we can embrace our immortal soul (26). Pascal presents this as a dazzling, metamorphic possibility, writing that “the immortality of the soul is something so important to us, something that touches us so profoundly, that we must have lost all feeling to be indifferent to knowing the facts of the matter” (27). Inspired by the possibility of an immortal soul, we are primed to be receptive to the wager, which tells us that if we want to maximize the expected outcome for our soul, we must gamble on God’s existence (28). If we now believe that it is through taking care of our immortal soul that we can transcend the misery of our bodily condition, the wager will indeed have a powerful pull on us. Inasmuch as the libertine’s challenge is escaping the misery of her contingent condition, Pascal presents the possibility of the immortal soul as a powerful alternative to the use of amusements and diversions. But is this alternative persuasive? The weakness in Pascal’s argument is noted by Sleinis in his analysis of Nietzsche’s parallel argument: “pure possibilities may have some capacity to exert pressure on our choices, but this capacity can in no way be equal to that of known actualities” (29). There is, in other words, a limit to how influential a mere possibility can be. If you know that a consideration motivating you to act is only possibly true, then you won’t feel that you have a decisive reason to act.
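Since the pull of the wager on the seeking libertine turns on this expected-utility claim, a minimal sketch of the calculation may help. The credence and the finite payoffs below are illustrative assumptions; Pascal’s argument needs only a nonzero probability and an infinite reward (cf. note 6 on quantifying the harm to the non-believer):

```python
# A minimal sketch of the wager's payoff logic. The credence p and the
# finite payoffs are assumed for illustration; the argument requires only
# that p be nonzero and that the reward for correct belief be infinite.
p = 0.01                      # the seeking libertine's small credence that God exists
INF = float("inf")

wager_for = (INF, -1.0)       # (payoff if God exists, payoff if not)
wager_against = (-10.0, 1.0)  # assumed finite stakes either way

def expected_utility(payoffs, credence):
    if_god, if_not = payoffs
    return credence * if_god + (1 - credence) * if_not

print(expected_utility(wager_for, p))      # inf: dominates for any p > 0
print(expected_utility(wager_against, p))  # 0.89: merely finite
```

However small the credence, the expected utility of wagering for God is infinite, which is why the argument’s real work lies in getting the possibility of the immortal soul onto the libertine’s radar in the first place.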
Pascal is confident that if we take the possibility of an immortal soul seriously, then we will eventually be led to believe it as an actuality. The problem, however, is whether we can take it seriously enough for this epistemic transformation to occur. This doesn’t mean that Pascal’s argument cannot work at all; it just means that its practical success will likely be limited to libertines with certain psychological constitutions (i.e., it will be more persuasive to someone with a credulous disposition than to someone with a skeptical disposition). IV. Walter Kaufmann on Our Misery So far, we have seen that Pascal’s wager requires a certain evaluative shift on the part of the libertine, and that certain sections of the Pensées can be read as making an argument for that shift. But there is a weakness in part of this argument, namely, the questionable plausibility of a mere possibility inspiring a dramatic revaluation. What I would like to consider, therefore, is an alternative response to the libertine’s crisis of value that would allow her to retain her current theoretical framework, but nonetheless allow her to transcend the apparent miseries of the human condition. We can read Kaufmann as addressing the libertine at the same stage that Pascal is—once she has accepted the futility of her diversions but does not know how else to cope with her unhappy condition—and arguing that the libertine can embrace her mortality rather than try to escape from it. Examining Kaufmann’s argument helps us to appreciate the way in which Pascal’s wager falls short as a straightforward appeal to the libertine’s self-interest. At most, the wager offers the libertine one way to escape her misery, but the libertine may find Kaufmann’s ideas more persuasive. While for Pascal the libertine is unhappy if she is left to ponder her mortal condition, Kaufmann argues that this is not so; in fact, it is our mortality that renders our lives here worthwhile. The libertine considers herself miserable because she will not live in this world forever, but Kaufmann urges her to consider how miserable she would be if she did. It’s true that death is frightening for those who “fritter their lives away,” but “if one lives intensely, the time comes when sleep seems bliss” (30). This means that if the libertine embraces all that this life throws at her, she will welcome death as a much-needed rest. One cannot live intensely forever. This argument might seem a bit problematic. After all, it is not clear why a simple good night’s sleep (or two) would not suffice for the one who lives intensely—why should she crave eternal sleep? The answer to this lies in the second argument that Kaufmann makes, namely, that without a final deadline we would not be able to live our lives as meaningfully. Our impending death offers a perspective that would otherwise be impossible. Kaufmann describes the way in which the threat of death motivates us to live vigorously: “the life I want is a life I could not endure in eternity. It is a life of love and intensity, suffering and creation, that makes life worthwhile and death welcome.” Death “makes life worthwhile” because it encourages us to carve out lives that are indeed worthwhile. For example, “love can be deepened and made more intense and impassioned by the expectation of impending death,” meaning that our desire to be with someone we love is made all the more acute by our knowledge that we cannot be with them forever.
When the libertine worries about the fact that she may one day lose her beloved, she need not retreat from these thoughts—either by seeking diversion or by entertaining the possibility of an immortal soul—but rather, as Kaufmann advises, she should embrace them. The fact that she may never see her beloved again is all the more reason for the libertine to express her love more eloquently and fervently than she ever would have if she were not worried about losing her beloved. It is not just that such intensity and passion would be impossible to sustain in an infinite life, but rather that in an infinite life we could never achieve it in the first place. Death offers a perspective on life that, contrary to what Pascal argues, makes our lives in this world vibrant and precious. Pascal writes that, “As men have not been able to cure death, wretchedness, ignorance, they have decided, in order to be happy, not to think about those things” (31). But Kaufmann argues that it is precisely by thinking about her own death that the libertine can be inspired to live in a way that makes her happy. Perhaps this is why Ecclesiastes muses that “it is better to go to the house of mourning than to the house of feasting”—proximity to death provides the living with an invaluable lesson to truly “take to heart” (32). The libertine desperately avoids confronting her mortality, when in fact, thinking about death makes her life better right now: “one lives better,” says Kaufmann, “when one expects to die,” and takes advantage of the time she has (33). This is not to deny the tragic reality that death often visits too early, but rather to suggest that inasmuch as this is not always the case, we are, as philosopher Bernard Williams puts it, “lucky in having the chance to die” (34). Pascal might still counter that even if contemplating our death imbues our lives with urgency and significance, belief in the Christian afterlife also accomplishes this inasmuch as our conduct in this life determines how we fare in the next. But this argument will have no sway over the libertine at the stage of the argument at which we are now encountering her—when she does not yet believe in God. And what Kaufmann’s argument has demonstrated is that the libertine does not need to wager on God’s existence in order to live life meaningfully and passionately. While the wager asked the libertine to revalue her values–which, as we have seen, is a non-trivial requirement–Kaufmann speaks directly to the evaluative commitments that the libertine already has. In a way, Kaufmann uses mortality in the same way that Pascal uses immortality: to redeem us from our misery by impressing upon us the urgency and significance of our lives. It’s true that Kaufmann and Williams don’t consider the possibility of an afterlife that is just as exciting as–if not more exciting than–earthly existence. There is, after all, no reason to assume that when we die we lose our ability to exercise agency. But the point is simply that they offer a way of seeing life on earth as meaningful regardless of what comes afterward. This is in sharp contrast with Pascal’s picture, in which life on earth is miserable unless it is redeemed by belief in the afterlife. This is not to say that Pascal is wrong per se; it is possible that Kaufmann would have lived a better life had he sought God and embraced religion. It is possible that he is currently burning in the depths of hell, wishing his philosophical reasoning had taken a different turn. But this is of no consequence.
What I am arguing is that Pascal is wrong to assume that the libertine’s mortality leaves her irredeemably miserable; Kaufmann offers an alternative perspective, whereby the libertine’s mortality is precisely what redeems her life and makes it worthwhile. Crucially, Kaufmann’s argument does not ask the libertine to entertain any theoretical possibilities like Pascal’s does, and it never requires that she make a wager of any sort. The libertine might still prefer Pascal’s argument, and therefore choose to see “the final act” as “bloody.” But as we have seen, she might choose to welcome death as a “blissful sleep.” And if Pascal cannot convince the libertine that mortal life is miserable, then he cannot get her into the evaluative mindset to be receptive to the wager. V. Conclusion The success of Pascal’s wager as an appeal to the libertine’s self-interest depends on his ability to convince the libertine to change her evaluative framework. At least at the outset, the possibility of an infinite life with God in heaven will repel rather than attract the libertine, giving her no reason to “wager all she has” (35). If we study the wager against the backdrop of Pascal’s broader apologetic project, however, we find the resources to persuade the libertine to “revalue her values.” This argument takes place in two stages. First, Pascal shows the libertine that the premium she places on amusements and entertainment falsely presupposes that they can truly make her happy. Pascal argues that they fail to do so, both because they are external—and therefore “subject to a thousand accidents”—and because they alienate the libertine from herself, making it impossible for her to discover what might truly make her happy. With the libertine’s evaluative framework thus dismantled, the inherent unhappiness of her condition becomes even more acute. Without diversions, she must confront the miserable fact of her mortality head-on. It is in this evaluative vacuum that Pascal offers her a new value that can save her from the misery of mortality: the immortal soul. At this stage of the argument, the libertine will not believe in the immortality of her soul as a metaphysical fact, but in considering this marvelous possibility, she will be encouraged to investigate it. And when Pascal tells her that her soul will fare best if she gambles on God’s existence, she will eagerly oblige. But this need not be the only way to save the libertine from the misery of mortality: Kaufmann suggests that the libertine should embrace and cherish her mortality, because it is through the prism of her own death that her life becomes urgent and precious. This approach does not require an epistemic leap of faith like Pascal’s did; it simply requires the libertine to look at the fact of her life in a new light. The upshot is that for those who find themselves moved by Pascal’s polemic against diversions, but unmoved by his appeal to dubious metaphysical facts, there might be a more attractive solution. After he presents the libertine with his wager, Pascal urges that “there is no time to hesitate!” From what we have seen, however, there might be far too much of it. Endnotes: 1 This insight is due to Ian Hacking, quoted in: Hájek, Alan. “Pascal’s Wager.” Stanford Encyclopedia of Philosophy, Stanford University, 1 Sept. 2017, plato.stanford.edu/entries/pascal-wager/.
2 While, as Hájek notes in his article, Pascal actually presents three different wager arguments, for the purposes of this paper, I will not discuss the correct interpretation/presentation of the wager. This is because my paper is not so much about the mechanics of the wager, but about the wager as a general strategy to inspire pragmatic commitment to God. 3 For the purposes of this paper, I adopt Pascal’s use of the term “libertine” to refer to his intended audience. This is partially for convenience, and partially meant to underscore that Pascal’s argument is addressed to a specific target audience and is not necessarily applicable to anyone who does not believe in God. As we will see throughout this paper, Pascal’s libertine has a very specific set of values and concerns, which at times may even seem unrealistic. Inasmuch as Pascal sees himself as addressing this sort of person, however, this paper will assume that his observations are accurate, and analyze whether Pascal’s argument is successful on Pascal’s own terms. 4 All quotations in this paragraph come from: Pascal, Blaise, and Roger Ariew. Pensées. Indianapolis, IN: Hackett Pub. Co., 2005, pg. 212–13 (S680/L418). 5 Pascal actually argues that there are two things that the libertine desires: the true and the good. However, Pascal argues that we cannot know whether God exists, and therefore “your reason is no more offended by choosing one rather than the other.” Since the libertine only stands to gain in the realm of happiness, and not in the realm of truth (or at least not yet), I focus, for brevity, only on this claim. 6 This is a simplification. Pascal does not mention exactly how we ought to quantify the harm that will come to a non-believer if God exists. It is certainly possible that the harm will be infinite. And since this is the strongest way to formulate Pascal’s wager, I choose to present it this way. 7 The case of trying a new food is interesting in its own right. While it is beyond the scope of this paper to analyze this case, it is worth noting that it is unclear how one might weigh the value of trying a food and disliking it against the value of trying a food and liking it, since there are also different degrees of liking and disliking a food. But I think it is fair to assume that, having had the experience of eating foods that you’ve liked and disliked, you can have a rough sense of the maximum and minimum amount of pleasure that can be derived from eating a food. I would venture to say that trying a food that you love more than any food you have ever eaten is still not a qualitatively different type of pleasure than eating a food that you really love. 8 Sleinis, E. E. Nietzsche's Revaluation of Values: A Study in Strategies. Urbana: University of Illinois Press, 1994, pg. 168. 9 Ibid. 10 As Ariew notes in his translation, “the word ‘diversion’ suggests entertainment, but to divert literally means: “to turn away” or to mislead.” By using this word, Pascal makes his critique implicit from the beginning. 11 Pascal, S165/L132. 12 Quotations in this paragraph come from Pascal, S168/L136. 13 Ibid. 14 Pascal, S165/L132. 15 Pascal, S166/L134. 16 Pascal, S168/L136. 17 Pascal, S165/L132. 18 Pascal, S197/L165. 19 Pascal, S33/L414. 20 The libertine says something in this spirit in Pascal, S165/L132. 21 Pascal, S168/L136. 22 Pascal, S33/L414. 23 In some interpretations of Nietzsche, the eternal recurrence is actually presented as a metaphysical truth that we must believe in.
Inasmuch as I am looking for an example that will parallel Pascal, however, I have chosen to discuss the interpretation that sees it as a pure possibility. 24 Evidence that Pascal believes those who are inspired by the possibility of an immortal soul and genuinely seek God as a result will come to have sure knowledge of God’s existence can be found in S681/L427. 25 This is not intended to summarize Pascal’s nuanced account of why we are wretched, but rather to encapsulate what it is that the libertine recognizes as “unhappy” about her condition: that is, all of the external factors that threaten her ability to enjoy diversions, the most intractable of which is death. 26 This might seem almost like a pre-wager wager: wager on belief in an immortal soul, since it provides the potential for immortality, rather than on belief in a mortal soul, since this will lead to a life of misery. 27 Pascal, S681/L427. 28 Of course, it is possible that there are other belief systems which include the notion of an immortal soul in an equally attractive way. This is similar to the well-known “many Gods objection” to Pascal’s wager, and while addressing it is not the subject of this paper, it is worth noting its presence. When I argue later on that the argument can work, I mean that, leaving other considerations such as this objection aside, it can work. 29 Sleinis, pg. 173. 30 Kaufmann, Walter, and Immanuel Velikovsky. The Faith of a Heretic. [1st ed.] Garden City, N.Y.: Doubleday, 1961, pg. 386. 31 Pascal, S168. 32 Ecclesiastes 7:2. 33 Quotations in this paragraph come from Kaufmann, pg. 386. 34 Williams, Bernard. “The Makropulos Case: Reflections on the Tedium of Immortality.” In Problems of the Self: Philosophical Papers 1956–1972, 82–100. Cambridge: Cambridge University Press, 1973. 35 Pascal, S680/L418. Bibliography: Kaufmann, Walter, and Immanuel Velikovsky. The Faith of a Heretic. [1st ed.] Garden City, N.Y.: Doubleday, 1961. Pascal, Blaise, and Roger Ariew. Pensées. Indianapolis, IN: Hackett Pub. Co., 2005. Sleinis, E. E. Nietzsche's Revaluation of Values: A Study in Strategies. Urbana: University of Illinois Press, 1994. Williams, Bernard. “The Makropulos Case: Reflections on the Tedium of Immortality.” In Problems of the Self: Philosophical Papers 1956–1972, 82–100. Cambridge: Cambridge University Press, 1973.

  • Douglas Beal

    Douglas Beal The Financial Case for Nations and Corporations to Put People and the Planet First Douglas Beal We are in a period of increasing societal disruption. Pressure is mounting to address the climate crisis. Racial equity issues have moved to the forefront. And the COVID-19 pandemic has caused untold suffering and death and upended economies around the globe. In the past, addressing such issues has been seen primarily as the responsibility of government. But increasingly, there are expectations that the private sector must play a leading role in driving progress on major societal challenges. I, along with my colleagues at Boston Consulting Group, have spent the last decade supporting nations and corporations in addressing social and environmental issues—and measuring how their efforts impact country GDP and company financial performance. My work in this area began with economic development, helping nations to advance in a way that improved the living standards of citizens. More recently I refocused on private sector work, helping companies and investors create strategies to deliver both business and societal value. The research and client work I’ve done in both areas reveal a powerful insight: whether one is talking about a country’s economic growth or a company’s profits or returns for shareholders, performance is not degraded by focusing on how decisions impact people and the planet. Rather, the evidence is mounting that integrating such factors into strategy enhances financial performance. Putting Well-Being at the Heart of a Nation’s Strategy BCG’s insight on these dynamics started with our work in the area of economic development. As we supported presidents and prime ministers around the world in honing their development strategies, it became clear they were looking for a way to measure their progress beyond the purely financial benchmark of GDP. This reflected their acknowledgement that robust GDP per capita growth in the short term means little if living standards are undermined in the long term (by poor health, underinvestment in education, a degraded environment, and a widening gap between rich and poor). The Sustainable Development Goals had not yet been put in place at this time, meaning a globally recognized holistic framework for measuring country progress did not exist. We set out to create one. This led to some deep conversations about what really matters for a society. As Robert F. Kennedy said, GDP “measures everything, in short, except that which makes life worthwhile.” We had to ask ourselves: What actually makes life worthwhile? We thought about general measures of happiness, for example, and whether levels of citizen happiness would be a good barometer for a nation’s performance. Ultimately, we decided that happiness would be too subjective for what we wanted to achieve. Instead we decided to focus on well-being, the conditions and quality of life people experience. We then asked ourselves: how do you measure well-being—and how can a government contribute to it? We spoke with numerous experts and dug into the research on well-being to determine what factors should comprise our measure. We eventually zeroed in on 10 dimensions: income, economic stability, employment, health, education, infrastructure, equality, civil society, governance, and environment. We identified a series of indicators for each—a total of 40 in all. The result was the Sustainable Economic Development Assessment (SEDA), a diagnostic tool and measurement framework launched in 2012.
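To make the structure of such a framework concrete, here is a minimal sketch of how a SEDA-style composite could be assembled. The ten dimensions come from the essay; the min-max normalization, the equal weighting, and the function itself are illustrative assumptions rather than BCG’s published method:

```python
import numpy as np

# The ten dimensions named in the essay; the normalization and equal
# weighting below are illustrative assumptions, not BCG's actual method.
DIMENSIONS = ["income", "economic stability", "employment", "health",
              "education", "infrastructure", "equality", "civil society",
              "governance", "environment"]

def composite_score(indicators):
    """indicators maps each dimension to a 2-D array of raw values
    (rows = countries, columns = that dimension's indicators);
    returns one composite score per country on a 0-100 scale."""
    dim_scores = []
    for dim in DIMENSIONS:
        x = np.asarray(indicators[dim], dtype=float)
        lo, hi = x.min(axis=0), x.max(axis=0)
        scaled = 100.0 * (x - lo) / (hi - lo)   # each indicator rescaled to 0-100
        dim_scores.append(scaled.mean(axis=1))  # average indicators within a dimension
    return np.mean(dim_scores, axis=0)          # equal-weight the ten dimensions
```

With roughly 40 indicator columns spread across the ten dimensions, every country ends up with a single comparable score, which is what allows the cross-country and over-time comparisons described next.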
SEDA allows us to track how a country’s well-being compares to that of other nations, determine the pace of progress over time, and identify areas in which countries are performing well or need to improve. SEDA revealed valuable insights. First, not surprisingly, countries with higher levels of wealth tended to have higher well-being. Norway, for example, has had the highest level of well-being relative to the rest of the world every year since we launched SEDA. Second, not all countries convert their wealth (GDP per capita) into well-being at equal rates. Some deliver well-being levels that are beyond what one would expect given the country’s wealth—and others deliver well-being far below what would be expected. In recent years Vietnam has been among the leading countries in terms of converting wealth into well-being—outpacing countries such as Germany, France, and the US on this metric. Third, inequality—and not just income inequality—has a major impact on well-being. Certainly, income inequality gets significant attention in political and media circles. But SEDA captures a broader view, assessing not only income inequality but also the lack of equity in access to health care and education as well. And our analysis last year found, somewhat surprisingly, that high levels of social inequality are a greater drag on well-being than high levels of income inequality. Over the years, as we continued to assess country levels of well-being, public sector clients, journalists, and others often raised a similar question. While it was clear that countries with higher levels of wealth or growth had more resources to advance well-being, we were frequently asked if the reverse was true. So, was there evidence that countries with a better record on well-being ultimately posted more robust GDP growth? In 2018 we decided to take a stab at answering that question. By then we could access ten years’ worth of SEDA data—enough time to give us confidence we could identify a long-term trend if it existed. Drawing on data for all 152 countries in our data set, we looked at a country’s initial well-being performance relative to its wealth in the period leading up to and including the financial crisis (from 2007 through 2009)—and its growth rate in the decade that followed. We found that on average, countries that produced better well-being for their population given their level of wealth did in fact have a higher GDP growth rate in the future (a stylized sketch of this analysis appears below). Our analysis also found that countries that had a better record at delivering well-being for citizens were more resilient during the financial crisis, taking fewer months to recover to pre-crisis GDP levels than countries with weaker records on well-being. It turns out that taking care of people and the planet is good economics. Focusing on Total Societal Impact As we worked with nations on development strategies, we urged them to think strategically about integrating the private sector into those efforts. This included understanding where the country’s most pressing needs existed and identifying the industries and companies that could play a role in addressing those needs. Banks, for example, can be key partners in expanding access to capital for entrepreneurs. Food manufacturers that expand their supply chain to include smallholder farmers can help raise incomes for those individuals and reduce poverty rates overall. And biopharmaceutical companies that move to expand access to medicine can play a vital role in improving health outcomes.
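Returning briefly to the country-level growth analysis flagged above: its logic can be sketched on synthetic data. The residual-based definition of wealth-to-well-being conversion, and all of the numbers, are illustrative assumptions, not BCG’s published methodology:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 152                                    # countries in the SEDA data set

# Synthetic data that builds in the essay's finding, purely for illustration.
log_gdppc = rng.normal(9.0, 1.2, n)        # log GDP per capita, 2007-09
conversion = rng.normal(0.0, 5.0, n)       # well-being beyond what wealth predicts
wellbeing = 10.0 * log_gdppc + conversion  # observed SEDA-style score
growth = 0.02 + 0.002 * conversion + rng.normal(0.0, 0.01, n)  # next-decade growth

# Step 1: regress well-being on log wealth; the residual is each country's
# conversion measure (well-being delivered relative to its wealth).
slope, intercept = np.polyfit(log_gdppc, wellbeing, 1)
residual = wellbeing - (intercept + slope * log_gdppc)

# Step 2: correlate initial conversion with growth over the following decade.
print(np.corrcoef(residual, growth)[0, 1])  # positive, per the essay's finding
```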
Time and time again my economic development work in the public sector reinforced the importance of the private sector in advancing important societal issues. In 2016, I started focusing more on working directly with large multinational corporations to find ways to improve both business returns and their positive impact on society. At that time, academic research had shown that integrating environmental, social, and governance (ESG) performance into investment decisions led to better returns from a portfolio perspective. What that meant for individual businesses was not quite as clear. Most of our clients are large corporations—and they had a lot of questions. First, CEOs and CFOs were grappling with whether they should think of good ESG performance as a cost or an opportunity. They also wanted to understand what specific ESG topics were most important for their industry. So, we set out to prove that in fact ESG is an opportunity—not a cost—and to identify those topics that matter for specific industries. In 2017, I joined a group of colleagues in the Social Impact practice to conduct a detailed study of ESG performance in four industries: biopharmaceuticals, oil and gas, consumer packaged goods, and retail and business banking. We assessed company performance in dozens of ESG topics, such as ensuring a responsible environmental footprint or promoting equal opportunity. We looked for any correlation with market valuation multiples and margins. Our goal was to determine whether companies that excelled in those areas, enhancing what we call Total Societal Impact (TSI), saw a difference in financial performance versus companies that lagged in those ESG areas. Now, as members of the Social Impact practice, we were of course hoping we’d find a link. In fact, the results exceeded our expectations. Nonfinancial performance (as captured by the ESG metrics) has a statistically significant positive correlation with the valuation multiples of companies in all the industries we analyzed. In each industry, investors rewarded the top performers in specific ESG topics with valuation multiples 3% to 19% higher, all else being equal, than those of the median performers in those topics. And top performers in certain ESG topics had margins that were up to 12.4 percentage points higher, all else being equal, than those of the median performers in those topics. The bottom line: not only was there no penalty for focusing on ESG, but companies that performed well in critical ESG areas were rewarded in the market. The Moment of Truth Our work on SEDA and TSI was completely different—looking at different players, using different methodologies, and conducted at different times. Yet the results yielded strikingly parallel insights: putting people and the planet at the center of strategy improves financial performance. Those insights have major implications for nations and companies as they navigate the current period of turbulence and disruption. Certainly, it is too early to know which countries around the world will prove more resilient in the face of the pandemic. However, our research does support the view that those nations that design recovery strategies that support citizen well-being are likely to fare best. In particular, governments should design economic revitalization programs that don’t just position their nation for economic success in the future, but also ensure the benefits of any gains are equally shared among citizens.
And those that created massive stimulus programs must leverage them as an opportunity to accelerate progress in fighting climate change. For companies, the imperative to transform in ways that create positive societal impact is equally strong. Companies should protect employees by ensuring workplace safety, while also reskilling workers and accelerating hiring where feasible. And as they transform their business in the face of the pandemic, they should integrate a societal impact lens into the effort. They can, for example, improve the resiliency of supply chains while also reducing carbon emissions and environmental impact. They can look for new product opportunities that yield real societal benefits. And they can partner with other companies or organizations to maximize impact. There are early indications that companies with a strong focus on their impact on society are faring better right now. Some key MSCI ESG indices, for example, have outperformed non-ESG benchmarks since the start of COVID-19. The challenges facing society today are grave—and daunting. But nations and corporations have massive leverage to move the needle against the climate threat, racial inequity, and the devastating pandemic. Without their leadership, it is hard to see how we can make progress in any of these areas. Lucky for us, the evidence shows it is in their economic interest to do so.

  • About Us | BrownJPPE

    Mission Statement Julian D. Jacobs ’19 Daniel Shemano ’19 The Brown University Journal of Philosophy, Politics, and Economics (JPPE) is a peer-reviewed academic journal for undergraduate and graduate students that is sponsored by the Center for Philosophy, Politics, and Economics at Brown University. The JPPE aims to promote intellectual rigor, free thinking, original scholarship, interdisciplinary understanding, and global leadership. By publishing student works of philosophy, politics, and economics, the JPPE attempts to unite, in a single academic discourse, fields that are too often partitioned. In doing so, the JPPE aims to produce a scholarly product greater than the sum of any of its individual parts. By adopting this model, the JPPE attempts to provide new answers to today’s most pressing questions. Five Pillars of the JPPE 1.) Interdisciplinary Intellectualism: The JPPE is committed to engaging with an interdisciplinary approach to academics. By publishing scholarly work within the disciplines of philosophy, politics, and economics, we believe we are producing work that transcends the barriers of any one field, producing a sum greater than its individual parts. 2.) Diversity: The JPPE emphasizes the importance of diversity in the articles we publish, the authors we work with, and the questions we consider. The JPPE is committed to equal opportunities and creating an inclusive environment for all our employees. We welcome submissions and job applicants regardless of ethnic origin, gender, religious beliefs, disability, sexual orientation, or age. 3.) Academic Rigor: In order to ensure that the JPPE is producing quality student scholarship, we are committed to a peer review process, whereby globally renowned scholars review all essays prior to publication. We expect our submissions to be well written, well argued, well researched, and innovative. 4.) Free Thinking and Original Arguments: The JPPE values free thinking and the contribution of original ideas. We seek excellent arguments and unique methods of problem solving when looking to publish an essay. This is one way in which the JPPE hopes to contribute to the important debates of our time. 5.) Global Leadership: By publishing work in philosophy, politics, and economics, we hope the JPPE will serve as a useful tool for future world leaders who would like to consider pressing questions in new ways, using three powerful lenses.

  • Predictive Algorithms in the Criminal Justice System: Evaluating the Racial Bias Objection

    Rebecca Berman Predictive Algorithms in the Criminal Justice System: Evaluating the Racial Bias Objection Rebecca Berman Increasingly, many courtrooms around the U.S. are utilizing predictive algorithms (PAs). PAs are AI tools that assign risk scores [for future offending] to defendants based on various data about the defendant, not including race, to inform bail, sentencing, and parole decisions, with the goals of increasing public safety, increasing fairness, and reducing mass incarceration. Although these PAs are intended to introduce greater objectivity to the courtroom by more accurately and fairly predicting who is most likely to commit future crimes, many worry about the racial inequities that these algorithms may perpetuate. Here, I scrutinize and subsequently support the claim that PAs can operate in racially biased ways, providing a strong ethical objection against their use. Then, I raise and consider the rejoinder that we should still utilize PAs because they are morally preferable to the alternative: leaving judges to their own devices. I conclude that the rejoinder adequately, but not conclusively, succeeds in rebutting the objection. Unfair racial bias in PAs is not sufficient grounds to outright reject their use, for we must evaluate the potential racial inequities perpetuated by utilizing these algorithms relative to the potentially greater racial inequities perpetuated without their use. The Racial Bias Objection to Predictive Risk Assessment ProPublica conducted research to support concerns that COMPAS (a leading predictive algorithm used in many courtrooms) is unfairly racially biased. Its research on risk scores for defendants in Florida showed: a. 44.9% of black defendants who do not end up recidivating are mislabeled as “high risk” (defined as a score of 5 or above), while only 23.5% of white defendants who do not end up recidivating are mislabeled as “high risk.” b. 47.7% of white defendants who end up recidivating are mislabeled as “low risk,” while only 28% of black defendants who end up recidivating are mislabeled as “low risk” (1). Intuitively, these findings strike us as an unfair racial disparity. COMPAS’s errors operate in different directions for white and black defendants: disproportionately overestimating the risk of black defendants while disproportionately underestimating the risk of white defendants. In “Measuring Algorithmic Fairness,” Deborah Hellman further unpacks the unfairness of this kind of racialized error rate disparity: First, different directions of error carry different costs. In the criminal justice system, we generally view false positives, which punish an innocent person or over-punish someone who deserves less punishment, as more costly and morally troublesome than false negatives, which fail to punish or under-punish someone who is guilty. The policies and practices we have constructed in the U.S. system reflect this view. Defendants are innocent until proven guilty, and there is a high burden of proof for conviction. Because of this, the judicial system errs on the side of producing more false negatives than false positives. Given the widely accepted view that false positives (punishing an innocent person or over-punishing someone) carry a greater moral cost than false negatives (failing to punish or under-punishing a guilty individual) in the criminal justice system, we should be especially troubled by black defendants disproportionately receiving errors in the false positive direction (2).
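The mechanics behind this kind of disparity can be reproduced with a small simulation; the numbers below are made up rather than drawn from COMPAS data. The point, developed just below, is that a score can be perfectly calibrated and still produce a higher false positive rate in whichever group has the higher measured base rate:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_group(n, a, b):
    """Draw each person's true risk p from Beta(a, b); the algorithm reports
    p itself as the score, and the outcome is Bernoulli(p). By construction
    P(recidivate | score = p) = p, so the score is perfectly calibrated."""
    scores = rng.beta(a, b, n)
    recid = rng.random(n) < scores
    return scores, recid

def error_rates(scores, recid, threshold=0.5):
    high = scores >= threshold    # labeled "high risk"
    fpr = high[~recid].mean()     # non-recidivists labeled "high risk"
    fnr = (~high)[recid].mean()   # recidivists labeled "low risk"
    return fpr, fnr

# Two hypothetical groups differing only in measured base rate of reoffending.
groups = {"higher base rate": (3, 3),  # base rate around 0.50
          "lower base rate": (2, 4)}   # base rate around 0.33

for name, (a, b) in groups.items():
    scores, recid = simulate_group(200_000, a, b)
    fpr, fnr = error_rates(scores, recid)
    print(f"{name}: base rate {recid.mean():.2f}, FPR {fpr:.2f}, FNR {fnr:.2f}")
```

On these assumptions the higher-base-rate group receives markedly more false positives and the lower-base-rate group more false negatives: the qualitative pattern ProPublica reported, produced without any group-specific treatment by the scorer.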
A black defendant mislabeled as “high risk” may very well lead a judge to impose a much longer sentence or set higher bail than is fair or necessary, a cost that black defendants would be shouldering disproportionately (in comparison to white defendants) given the error rate disparity produced by COMPAS. Second, COMPAS’s lack of error rate parity is particularly problematic due to its links to structural biases in data used by PAs. Mathematically, a calibrated algorithm will yield more false positives in the group with a higher base rate of the outcome being predicted, as the simulation above illustrates. PAs act upon data that suggest a much higher base rate of black offending than white offending, and this base rate discrepancy can reflect structural injustices: I. Measurement Error: Black communities are over-policed, so a crime committed by a black person is much more likely to lead to an arrest than a crime committed by a white person. Therefore, the measured difference in offending between black and white offenders is much greater than the real (statistically unknowable) difference in offending between black and white offenders, and PAs unavoidably utilize this racially biased arrest data (3). II. Compounding Injustice: Due to historical and ongoing systemic racism, black Americans are more likely to live in conditions, such as poverty, certain neighborhoods, and low educational attainment, that correlate with higher predicted criminal behavior. Therefore, if and when PAs utilize criminogenic conditions as data points, relatively more black offenders will score “high risk” as a reflection of past injustices (4). To summarize, data reflecting unfair racial disparities are necessarily incorporated into COMPAS’s calculations, so unfair racial disparities will come out of COMPAS’s predictions. For all of these reasons—the high cost of false positives, measurement error, and compounding injustice—lack of error rate parity is a morally relevant attack on the fairness of COMPAS. By labeling black defendants who do not end up re-offending as “high risk” at nearly twice the rate it does white defendants, COMPAS operates in an unfairly racially biased way. Consequently, we should not use PAs like COMPAS in the criminal justice system. Rejoinder to the Racial Bias Objection to Predictive Risk Assessment The argument, however, is not that simple. An important rejoinder is based on the very reason why we find such tools appealing in the first place: humans are imperfect, biased decision-makers. We must consider the alternative to using risk tools in criminal justice settings: sole reliance on a human decision-maker, one that may be just as susceptible to racial bias, if not more so. Due to historical and continuing forces in the U.S. creating an association between dark skin and criminality, and the fact that judges are disproportionately white, judges unavoidably carry implicit or even explicit biases that lead them to perceive black defendants as more dangerous than their white counterparts. This bias inevitably seeps into judges’ highly subjective decisions. Many studies of judicial decision-making show racially disparate outcomes in bail, sentencing, and other key criminal justice decisions (5). For example: a. Arnold, Dobbie, and Yang (2018) find, “black defendants are 3.6 percentage points more likely to be assigned monetary bail than white defendants and, conditional on being assigned monetary bail, receive bail amounts that are $9,923 greater” (6). b.
According to the Bureau of Justice Statistics, “between 2005 and 2012, black men received roughly 5% to 10% longer prison sentences than white men for similar crimes, after accounting for the facts surrounding the case” (7). Consequently, the critical and challenging question is not whether PAs are tainted by racial biases, but rather which is the “lesser of two evils” in terms of racial justice: utilizing PAs or leaving judges to their own devices? I will argue the former, especially if we consider the long-term potential for improving our predictive decision-making through PAs. First, although empirical data on this precise matter is limited, we have reason to believe that utilizing well-constructed PAs can reduce racial inequities in the criminal justice system. Kleinberg et al. (2018) modeled New York City pre-trial hearings and found that “a properly built algorithm can reduce crime and jail populations while simultaneously reducing racial disparities” (8). Even though the ProPublica analysis highlighted disconcerting racial data, it did not compare decision-making using COMPAS to decisions made by judges without such a tool. Second, evidence-based algorithms present more readily available means for improvement than the subjective assessments of judges. Scholars and journalists can critically examine the metrics and their relative weights used by algorithms and work to eliminate or reduce the weight of metrics that are found to be especially potent in producing racially skewed and inaccurate predictions. Also, as Hellman suggests, race can be soundly incorporated into PAs to increase their overall accuracy, because certain metrics can be distinctly predictive of recidivism in white versus black offenders. For example, “housing stability” might be more predictive of recidivism in white offenders than black offenders (9). If an algorithm’s assessment of this metric were to occur in conjunction with information on race, its overall predictions would improve, reducing the level of unfair error rate disparity (10). Furthermore, PAs’ level of bias is consistent and uniform, while the biases of judges are highly variable and hard to predict or assess. Uniform bias is easier to ameliorate than variable, individual bias, for only one agent of bias has to be tackled rather than an abundance of them. All in all, there appear to be promising ways to reduce the unfairness of PAs—particularly if we construct these tools with a concern for systemic biases—while there currently appears to be no ready means to better ensure a judiciary full of systematically less biased judges. The question here is not “which is more biased: PAs or judges?” but rather “which produces more racially inequitable outcomes: judges utilizing PAs or judges alone?” Even if improved algorithms’ judgments are less biased than those of judges, we must consider how the human judge, who is still the final arbiter of decisions, interacts with the tool. Is a “high risk” score more salient to a judge when given to a black defendant, perhaps leading to continued or even heightened punitive treatment being disproportionately shown towards black offenders? Simultaneously, is a “low risk” score only salient to judges when given to a white defendant, or can it help a judge overcome implicit biases to also show more leniency towards a “low risk” black offender? In other words, does utilizing this tool serve to exacerbate, confirm, or ameliorate the perpetuation of racial inequity in judges’ decisions?
Much more empirical data is required to explore these questions and come to more definitive conclusions. However, this uncertainty is no reason to completely abandon PAs at this stage, for PAs hold great promise for net gains in racial equity, because we can and should keep working to overcome their structural flaws. In conclusion, while COMPAS in its current form operates in a racially biased way, this factor alone is not enough to forgo the use of PAs in the criminal justice system: we must consider the extent of unfair racial disparities perpetuated by tools like COMPAS relative to the extent of unfair racial disparities perpetuated when judges make decisions without the help of a tool like COMPAS. Despite PAs’ flaws, we must not instinctively fall back on the alternative of leaving judges to their own devices, where human cognitive biases reign unchecked. We must embrace the possibility that we can improve human decision-making by using ever-improving tools like properly crafted risk assessment instruments. Endnotes 1 ProPublica, “Machine Bias.” 2 Hellman, “Measuring Algorithmic Fairness,” 832-836. 3 Ibid, 840-841. 4 Ibid, 840-841. 5 National Institute of Justice, “Relationship between Race, Ethnicity, and Sentencing Outcomes: A Meta-Analysis of Sentencing Research.” 6 Arnold, Dobbie, and Yang, “Racial Bias in Bail Decisions,” 1886. 7 Bureau of Justice Statistics, “Federal Sentencing Disparity: 2005-2012,” 1. 8 Kleinberg et al., “Human Decisions and Machine Predictions,” 241. 9 Corbett-Davies et al., “Algorithmic Decision Making and the Cost of Fairness,” 9. 10 Hellman, “Measuring Algorithmic Fairness,” 865. Bibliography Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. “Machine Bias.” ProPublica. May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Arnold, David, Will Dobbie, and Crystal S. Yang. “Racial Bias in Bail Decisions.” The Quarterly Journal of Economics 133, no. 4 (November 2018): 1885–1932. https://doi.org/10.1093/qje/qjy012. Bureau of Justice Statistics. “Federal Sentencing Disparity: 2005-2012.” 248768. October 2015. https://www.bjs.gov/content/pub/pdf/fsd0512_sum.pdf. Corbett-Davies, Sam, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. “Algorithmic Decision Making and the Cost of Fairness.” In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 797-806. 2017. Hellman, Deborah. “Measuring Algorithmic Fairness.” Virginia Public Law and Legal Theory Research Paper, no. 2019-39 (July 2019). Kleinberg, Jon, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. “Human Decisions and Machine Predictions.” The Quarterly Journal of Economics 133, no. 1 (February 2018): 237–293. https://doi.org/10.1093/qje/qjx032. National Institute of Justice. “Relationship between Race, Ethnicity, and Sentencing Outcomes: A Meta-Analysis of Sentencing Research.” Ojmarrh Mitchell and Doris L. MacKenzie. 208129. December 2004. https://www.ojp.gov/pdffiles1/nij/grants/208129.pdf. Acknowledgments I would like to thank Professors Frick and Masny for teaching the seminar “The Ethics of Emerging Technologies,” for which I wrote this paper. Thank you for bringing my attention to this topic and Hellman’s paper and for helping me clarify my argument. I would like to thank my dad for helping me talk through ideas and providing feedback on my first draft of this paper.

  • Sydney Bowen

    Sydney Bowen A “Shot” Heard Around the World: The Fed made a deliberate choice to let Lehman fail. It was the right one. Sydney Bowen On the morning of September 15, 2008, the Dow Jones Industrial Average plunged more than 500 points; $700 billion in value vanished from retirement plans, government pension funds, and investment portfolios (1). This shocking market rout was provoked by the bankruptcy filing of Lehman Brothers Holding Inc., which would soon become known as “the largest, most complex, most far-reaching bankruptcy case” filed in United States history (2). Amid job loss, economic turmoil, and choruses of “what ifs,” a myriad of dangerous myths and conflicting stories emerged, each desperately seeking to rationalize the devastation of the crisis and explain why the Federal Reserve did not extend a loan to save Lehman. Some accuse the Fed of making a tragic mistake, believing that Lehman’s failure was the match that lit the conflagration of the entire Global Financial Crisis. Others disparage the Fed for bowing to the public’s political opposition towards bailouts. The Fed itself, however, adamantly maintains that it “did not have the legal authority to rescue Lehman,” an argument played in unremitting refrain in the years following the crisis. In this essay, I discuss the various dimensions of the heated debate on how and why the infamous investment bank went under. I examine the perennial question of whether regulators really had a choice in allowing Lehman to fail, an inquiry that prompts the multi-dimensional and more subjective discussion of whether regulators made the correct decision. I assert that (I) the Fed made a deliberate, practical choice to let Lehman fail and posthumously justified it with a façade of legal inability, and that (II) in the context of the already irreparably severe crisis, the fate of the future financial landscape, obligations to taxpayers, and the birth of the landmark legislation TARP, the Fed made the ‘right’ decision. I. The Fed’s Almost Rock-Solid Alibi: Legal Jargon and Section 13(3) Fed Chairman Ben Bernanke, Former Treasury Secretary Hank Paulson, and New York Fed general counsel Thomas Baxter Jr. have each argued in sworn testimony that regulators wanted to save Lehman but lacked the legal authority to do so. While their statements are not lies, they neglect to tell the entire – more incriminating – truth. In this section, I assert that Fed officials deliberately chose not to save Lehman and justified their decision after the fact with the impeccable alibi that they did not have a viable legal option. In a famous testimony, Bernanke announced, “[T]he only way we could have saved Lehman would have been by breaking the law, and I’m not sure I’m willing to accept those consequences for the Federal Reserve and for our system of laws. I just don’t think that would be appropriate” (3). At face value, his argument appears sound; however, the “law” alluded to here–Section 13(3) of the Federal Reserve Act–was not a hard and fast body of rules capable of being “broken,” but rather a weakly worded, vague body that encouraged “regulatory gamesmanship and undermined democratic accountability” (4). i. Section 13(3) Section 13(3) of the Federal Reserve Act gives the Fed broad power to lend to non-depository institutions “in unusual and exigent circumstances” (5).
It stipulates that a loan must be “secured to the satisfaction of the [lending] Reserve Bank,” limiting the amount of credit that the Fed can extend to the value of a firm’s collateral in an effort to shield taxpayers from potential losses (6). Yet, since the notion of “satisfactory security” has no precise contractual definition, Fed officials had ample room to exercise discretionary judgment when appraising Lehman’s assets. This initial legal freedom was further magnified by the opaqueness of the assets themselves – mortgage-backed securities, credit default swaps, and associated derivatives were newfangled financial instruments manufactured from a securitization process, complexly tranched and nearly impossible to value. Thus, the three simple words, “secured to satisfaction,” provided regulators with an asylum from their own culpability, allowing them to hide a deliberate choice inside a comfortable perimeter of legal ambiguity. ii. Evaluations of Lehman’s Assets and “Secured to Satisfaction” The “legal authority” to save Lehman hinged upon the Fed’s conclusions on Lehman’s solvency and their evaluation of the firm’s available collateral–a task that boiled down to Lehman’s troubled and illiquid real-estate portfolio, composed primarily of mortgage-backed securities. Lehman had valued the portfolio at $50 billion, purporting a $28.4 billion surplus; however, Fed officials and potential private rescuers, skeptical of Lehman’s real-estate valuation methods, argued that there was a gaping “hole” in the balance sheet. Bank of America, a private party contemplating a Lehman buyout, maintained that the size of the hole amounted to “$66 billion,” while the Fed’s task team of Goldman Sachs and Credit Suisse CEOs determined that “tens of billions of dollars were missing” (7). Esteemed economist Lawrence Ball, who meticulously reviewed Lehman’s balance sheet, however, concluded to the contrary–there was no “hole,” and Lehman was solvent when the Fed allowed it to fail. While I do not claim to know which of the various assessments was correct, the simple fact remains–the myriad of conflicting reports speaks to the ultimate subjectivity of any evaluation. “Legal authority” became hitched to the value of mortgage-backed securities, and in 2008 their value had become dangerously opaque. In discussing the Fed’s actions, it is necessary to point out that the Federal Reserve has a rare ability to value assets more liberally than a comparable private party–it is able to hold distressed assets for longer and ultimately exerts incredible influence over any security’s final value, as it controls monetary policy. The Dissenting Statement of the FCIC report aptly reveals that Fed leaders could have simply guided their staff to “re-evaluate [Lehman’s balance sheet] in a more optimistic way to justify a secured loan;” however, they elected not to do so since such action did not align with their private, practical interests (8). The “law” could have been molded in either direction–the Fed consciously chose the direction of nonintervention just as easily as it could have chosen the opposite. iii. The Fed’s “Practical” and Deliberate Choice Section 13(3) had been invoked just five months earlier in March 2008, when the Fed extended a $29 billion loan to facilitate JP Morgan’s purchase of a different failing firm, Bear Stearns. In an effort to separate the Fed’s handling of Bear Stearns from Lehman, Bernanke admits that considerations behind each decision were both “legal and practical” (9).
While in Bear Stearns’s case practical judgment weighed in favor of intervention, in Lehman’s case it did not: “if we lent the money to Lehman, all that would happen would be that the run [on Lehman] would succeed, because it wouldn’t be able to meet the demands, the firm would fail, and not only would we be unsuccessful, but we would [have] saddled the taxpayer with tens of billions of dollars of losses” (10). While the exhaustive array of arguments and testimony challenging the Fed’s claim of legal inability is cogent, perhaps the most chilling evidence lies in an unassuming and incisive question: “Since when did regulators let a lack of legal authority stop them? There was zero legal authority for the FDIC’s broad guarantee of bank holding debt. Saving Lehman would have been just one of many actions of questionable legality taken by regulators” (11). iv. Other Incriminating Facts: The Barclays Guarantee and Curtailed PDCF Lending An analysis of Lehman’s failure would be incomplete without discussing the Fed’s resounding lack of action during negotiations of a private rescue with Barclays, a critical moment in the crisis that could have salvaged the failing firm without contentious use of public money. Barclays began conversing with the U.S. Treasury Department a week prior to Lehman’s fall as they contemplated and hammered out terms of an acquisition (12). The planned buyout by the British bank would have gone through had the Fed agreed to guarantee Lehman’s trading obligations during the time between the initial deal and the final approval; yet, the Fed deliberately refused to intervene, masking its true motives behind a legal inability to offer a “‘naked guarantee’–one that would be unsecured and not limited in amount” (13). However, since such a request for an uncapped guarantee never occurred, the Fed’s legal alibi is deceitfully misleading. In truth, Lehman asked for secured funding from the Fed’s Primary Dealer Credit Facility (PDCF), a liquidity window allowing all Wall Street firms to take out collateralized loans when cut off from market funding (“The Fed—Primary Dealer Credit Facility (PDCF),” n.d.). While Lehman would not have been able to post eligible collateral under the initial requirement of investment-grade securities, it likely would have been able to secure a loan under the expanded version of the program that accepted a broader range of collateral. The purposeful withholding of the expanded collateral terms from Lehman is one of the most questionable aspects of the Lehman weekend, and is perhaps the most lucid evidence that the Fed made a deliberate choice to let the firm fail. The FCIC details the murky circumstances and clear absence of an appropriate explanation for the act: “the government officials made it plain that they would not permit Lehman to borrow against the expanded types of collateral, as other firms could. The sentiment was clear, but the reasons were vague” (14). If there had been a rational explanation, regulators would have articulated it. Instead, they merely repeated that “there existed no obligation or duty to provide such information or to substantiate the basis for the decision not to aid or support Lehman” (15). The Fed’s refusal to provide PDCF liquidity drove the final nail into Lehman’s coffin–access to such a loan would have made the difference in Lehman’s being able to open for business that infamous morning. v.
The Fed did not furnish the FCIC with any analysis to show that Lehman lacked sufficient collateral to secure a loan under 13(3), referencing only the estimates of other Wall Street firms and declining to respond to a direct request for “the dollar value of the shortfall of Lehman’s collateral relative to its liquidity needs” (16). Diverging from typical protocol, whereby the Fed’s office “wrote a memo about each of the [potential] loans under Section 13(3),” Lehman’s case contains no official memo. When pressed on this topic, Scott Alvarez, the General Counsel of the Board of Governors of the Federal Reserve, rationalized the opportune lack of evidence as an innocuous judgment call: “folks had a pretty good feeling for the value of Lehman during that weekend, and so there was no memo prepared that documented why it is we didn’t lend... they understood from all of [the negotiations] that there wasn’t enough there for us to lend against and so they weren’t willing to go forward” (17). While this absence of evidence does not prove that the Fed had access to a legal option, it highlights a disconcerting and suggestive vacancy in the Fed’s claims. Consider an analogous courtroom scene in which a defendant exercises the right to remain silent rather than respond to a question that may implicate them; similarly, the Fed’s intentional evasion of the request for concrete evidence reads as an incriminating insinuation of guilt. The lack of a “paper trail” becomes even more confounding when coupled with the Fed’s inconsistent and haphazard statements justifying the decision. Only after the initial praise for the decision soured into a surge of public criticism did any mention of legality enter the public record. Nearly three weeks after Lehman’s fall, on October 7th, Bernanke introduced a strategic “alibi”: “Neither the Treasury nor the Federal Reserve had the authority to commit public money in that way” (18). Bernanke insists that he will “maintain until [his] deathbed that [they] made every effort to save Lehman, but were just unable to do so because of a lack of legal authority” (19). However, when considering the subjectivity of “reasonable assurance” of repayment, the malleability of “legal authority,” and the convenient lack of evidence to undermine his statement, Bernanke’s “dying” claim becomes comically hollow. If the Fed had truly made “every effort” to rescue Lehman, it would have relied on more than a “pretty good feeling” – had they truly been sincere, the Federal Reserve, a team of seasoned economists, would have used hard numerical facts as guidance for a path forward.

vi. The Broader Implications of “Secured to Satisfaction”: A Logical Fallacy

While the Fed’s lack of transparency is unsettling, perhaps the most unnerving aspect of the entire Lehman episode is the precarious regulatory framework that the American financial system trusted during a crisis. The concept of “secured to satisfaction” is not the bulletproof legal threshold painted by the media; rather, it was a malleable moving target, molded by the generosity of the Fed’s estimates and the fluctuating state of the economy instead of by precise mathematical facts. A 2018 article by Columbia Law Professor Kathryn Judge exposes the logical fallacy of Section 13(3)’s “secured to satisfaction,” citing how “subsequent developments can have a first order impact on both the value of the assets accepted as collateral and the apparent health of the firms needing support” (20).
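Judge’s point can be made concrete with a small numerical sketch. The Python snippet below uses entirely hypothetical numbers – a stylized pool of mortgage cash flows and an invented loan size, not Lehman’s actual book or the Fed’s actual methodology – to show how the same collateral can look “secured to satisfaction” or hopelessly insufficient depending on the discount rate, a variable the central bank itself influences:

# Hypothetical illustration: collateral "adequacy" depends on the
# discount rate used to value it, a rate monetary policy can move.

def present_value(cashflows, rate):
    """Discount a list of annual cash flows at a flat annual rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

pool = [6.0] * 10          # stylized mortgage pool: $6B a year for 10 years
loan_request = 40.0        # size of the contemplated secured loan, in $B

for rate in (0.03, 0.08, 0.15):   # easy-money vs. crisis-stress discount rates
    value = present_value(pool, rate)
    verdict = "secured to satisfaction" if value >= loan_request else "insufficient"
    print(f"discount rate {rate:.0%}: collateral worth ${value:.1f}B -> {verdict}")

At a 3% discount rate the pool is worth roughly $51 billion and the loan looks comfortably secured; at 15% the same cash flows are worth roughly $30 billion and the loan looks reckless. The particular numbers are invented; the structure of the test is the point – “secured to satisfaction” asks whether a valuation clears a threshold, and the valuation moves with assumptions the evaluator controls.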
The “legal authority” of regulators to invoke Section 13(3) is thus a circular and empty concept, hitched to nebulous evaluations of complex and opaque securities – assets that were not only inherently hard to value but whose valuations could later be manipulated. By adjusting the composition of its balance sheet (open market operations) and altering interest rates, the Fed guides the behavior of financial markets, subtly inflating (or deflating) the value of a firm’s collateral (21). Indeed, in the years following the government’s support of Bear Stearns and AIG, the Fed’s aggressive and novel monetary policy (near-zero interest rates and a large-scale program of quantitative easing) may have been “critical to making the collateral posted by [Bear Stearns and AIG] seem adequate to justify the central bank’s earlier actions” (22). Using collateral quality and solvency as prerequisites for lawful action is inherently problematic, since a firm’s health and the quality of its collateral are not given exogenously – they are endogenous variables that regulators themselves play a critical role in determining. Thus, accepting the narrative that Lehman failed because the Fed lacked any legal authority to save it would be a naive oversight. Rather, Lehman failed because the Fed lacked the practical and political motivations to exploit the law.

II. The Right Choice

As Lehman’s downfall is both a politically contentious and emotionally charged topic, it is necessary to approach the morality of the Fed’s decision with sympathy and caution. In the following sections, I intend to illustrate why regulators made the right decision in allowing Lehman to fail, using non-partisan facts organized around four key arguments. (1) Lehman was not the watershed event of the Crisis; the market panic following September 2008 was a reaction to a collection of unstoppable, unrelated, and market-shaking events. (2) Lehman’s failure expunged the hazardous incentives carved into the financial landscape beforehand; policymakers shrewdly chose long-term economic order over the short-term benefit of keeping a single firm afloat. (3) Failure was the “right” and only choice from a taxpayer’s perspective. (4) Lehman’s demise was a necessary catastrophe, creating circumstances so parlous that Congress passed TARP, landmark legislation that gave regulators the authority that ultimately revived the financial system.

(1) Lehman Was Not the Watershed Event of the Crisis

For many people, the heated debate over whether regulators did the right thing in allowing Lehman to fail is synonymous with the larger question: “would rescuing Lehman have saved us from the Great Recession?” In the following section, I assert that Lehman was not the defining moment of the Financial Crisis (as is often construed in the media); rather, the global financial turmoil was irreversibly underway by September 2008, and the ensuing disaster could not have been averted simply by Lehman’s rescue. “The problem was larger than a single failed bank – large, unconnected financial institutions were undercapitalized because of [similar, failed housing bets]” (23). By Monday, September 15, Bank of America had rescued the deteriorating Merrill Lynch, and the insurance giant AIG was on the brink of failure – a testament to the critical fact that many other large financial institutions were also in peril due to losses on housing-related assets and a subsequent liquidity crisis.
Indeed, in the weeks preceding Lehman’s failure, the interbank lending market had virtually frozen, plunged into distress by a contagious spiral of self-fulfilling expectations. Unable to ascertain the location and size of the subprime risk held by counterparties in the market, investors became panicked by the obscured yet seemingly ubiquitous risk of housing exposure, precipitously cutting off or restricting funding to other market participants. This perceived threat of a liquidity crisis triggered the downward spiral of the interbank lending market in the weeks preceding Lehman’s fall – a market which pumped vital cash into nearly every firm on Wall Street. The LIBOR-OIS spread, a proxy for counterparty risk and a robust indicator of the state of the interbank market, illustrates these “illiquidity waves” that severely impaired markets in 2008 (24). In the weeks prior to the failure of Lehman Brothers, the spread spiked dramatically, soaring above 300 basis points and portraying the cascade of panic and the contraction of lending standards in the interbank market. The idea that Lehman was the key moment in the crisis might be accurate if nothing of significance had happened before its failure; however, as I outline below, this was clearly not the case. The quick succession of events occurring in September 2008 – events which would have occurred regardless of Lehman’s failure – triggered the global financial panic. A New Yorker article publishing a detailed timeline of the weekend exposes how AIG’s near failure was completely unrelated to Lehman (25). On Saturday, September 13, AIG’s “looming multi-billion-dollar shortfall” from bad gambles on credit default swaps became apparent. Rescuing AIG became a top priority throughout the weekend, and on Tuesday, the day after Lehman filed for bankruptcy protection, the Fed granted an $85 billion emergency loan to salvage AIG’s investments (26). Given the curious timing, AIG’s troubles are often chalked up to a market reaction to Lehman’s failure; however, the facts expose the failures of AIG and Lehman as merely a close succession of unfortunate yet unrelated events. In a similar light, the failures and subsequent buyouts of Washington Mutual (WaMu) and Wachovia, events that further rocked financial markets and battered confidence, would have occurred regardless of a Lehman bailout. Both commercial banks were heavily involved in subprime mortgages and were in deep trouble before Lehman. University of Oregon economist Tim Duy asserts that, even with a Lehman rescue, “the big mortgage lenders and regional banks [i.e., WaMu and Wachovia] that were more directly affected by the mortgage meltdown likely wouldn’t have survived” (27). The financial system was precariously fragile by the fall of 2008, and saving Lehman would not have defused the larger crisis or the ensuing market panic that erupted after September 2008. Critics of the Fed’s decision often cite how the collapse of Lehman Brothers begat the $62 billion Reserve Primary Fund’s “breaking of the buck” on Thursday, September 18, and precipitated a $550 billion run on money-market funds. Lehman’s dire effect on money and commercial paper markets is irrefutable; however, arguments that Lehman triggered the broader global financial panic neglect the relevant facts. Lehman’s failure did not freeze credit markets, nor would a Lehman rescue have unfrozen them; frozen credit markets were the key culprit responsible for the escalation and depth of the Crisis (28).
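For readers unfamiliar with the metric, the LIBOR-OIS spread is simple arithmetic: the three-month LIBOR minus the corresponding overnight indexed swap rate, quoted in basis points. The short Python sketch below uses hypothetical placeholder rates, loosely shaped like the 2007–2008 pattern rather than the actual series analyzed by Sengupta and Tam, to show how the gauge is read:

# Hypothetical illustration of the LIBOR-OIS spread as a stress gauge.

def spread_bps(libor_pct, ois_pct):
    """Counterparty-risk proxy: LIBOR minus OIS, in basis points."""
    return (libor_pct - ois_pct) * 100

observations = [
    # (month, 3-month LIBOR %, 3-month OIS %) -- placeholder values
    ("2007-01", 5.36, 5.28),   # calm: spread of a few basis points
    ("2008-08", 2.81, 2.11),   # strained interbank market
    ("2008-10", 4.82, 1.67),   # post-Lehman panic
]

for month, libor, ois in observations:
    s = spread_bps(libor, ois)
    flag = "panic" if s > 300 else ("stress" if s > 50 else "normal")
    print(f"{month}: {s:5.0f} bps [{flag}]")

In quiet markets the spread historically sat near ten basis points; a reading above 300 basis points, as in the fall of 2008, signals that banks are charging one another a steep premium against the risk that a counterparty fails.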
Credit markets did not freeze in 2008 because the Fed chose not to bail out Lehman – they froze because of the mounting realization that mortgage losses were concentrated in the financial system, but nobody knew precisely where they lay. It was this creeping, inevitable realization, amplified by Lehman and the series of September events, that caused financial hysteria (29). As Geithner explains, “Lehman’s failure was a product of the forces that created the crisis, not the fundamental cause of those forces” (30). The core problems that catalyzed the financial market breakdown were an amalgamation of highly leveraged institutions, a lack of transparency, and the rapidly deteriorating value of mortgage-related assets – bailing out Lehman would not have miraculously fixed these problems. While such an analysis cannot unequivocally prove that regulators made the right decision in choosing to let Lehman fail, it offers a step in the right direction: the conventional wisdom that Lehman single-handedly triggered the collapse of confidence that froze credit markets and caused borrowing rates for banks to skyrocket is unfounded. Not only was Lehman’s bankruptcy not the sole trigger of the crisis, it was not even the largest trigger. Research by economist John Taylor asserts that Lehman’s bankruptcy was not the decisive event peddled by the media – using the LIBOR spread (the standard measure of market stress), Taylor found that the true ratcheting up of the crisis began on September 19, when the Fed revealed that it planned to ask Congress for $700 billion to defuse the crisis (31). Arguments advanced by mainstream media that saving Lehman would have averted the recession are naively optimistic and promote a dangerously inaccurate narrative of the events of 2007–2009. The failure of Lehman did indeed send new waves of panic through the economy; however, Lehman was not the only disturbance to rock financial markets in September of 2008 (32). This latter fact is of critical importance.

(2) Lehman’s Collapse Caused Inevitable and Necessary Market Change

“The inconsistency was the biggest problem. The Lehman decision abruptly and surprisingly tore the perceived rule book into pieces and tossed it out the window.” – Former Vice Chairman of the Federal Reserve Alan Blinder (33).

Arguments that cite the ensuing market panic and erosion of confidence that erupted after Lehman’s failure are near-sighted and fail to appreciate the larger picture motivating policymakers’ decision. Regulators’ refusal to rescue the then fourth-largest investment bank, an institution assumed “too big to fail,” dispensed a necessary wake-up call to deluded and unruly Wall Street firms, which had been lulled into a costly false sense of security. The question of whether regulators did the right thing in allowing Lehman to fail cannot be studied in a vacuum; it must be considered alongside the more consequential question of whether regulators made the right decision in saving Bear Stearns. In March 2008, the Fed’s extension of a $29 billion loan to Bear Stearns rewrote the tacit rules that had long governed the political and fiscal landscape, substantiating the notion that institutions could be “too big or too interconnected to fail.” The comforting assumption that regulators would intervene to save every systemically important institution from failure was a turning point in the crisis, “setting the stage for [the financial carnage] that followed” (34).
After the Bear Stearns intervention, regulators faced a formidable and insuperable enemy: the inexorable march of time. It would have been unsustainable for the government to continue bailing out every ailing financial firm. “These officials would have eventually had to say ‘no’ to someone, sometime. The Corps of Financial Engineers drew the line at Lehman. They might have been able to let the process run a few weeks more and let the bill get bigger, but ultimately, they would have had to stop. And when they did expectations would be dashed and markets would adjust. If Lehman had been saved, someone else would have been allowed to fail. The only consequence would be the date when we commemorate the anniversary of the crisis, not that the crisis would have been forever averted” (35). The Lehman decision corrected the costly market expectations created by Bear Stearns’ rescue and restored efficiency and discipline to markets. Throughout the crisis, policymakers, unable to avoid damage entirely, were forced to decide which parties would bear losses. Lehman’s demise was a reincarnation and emblem of their past decisions – the precedent of taxpayer burden had further encouraged Wall Street’s excessive leverage and reckless behavior (36). Saving Lehman would have simply hammered these skewed incentives further into markets, putting the long-term stability and structure of capitalist markets at risk. Taxpayers would have been forced to foot a bill regardless of the Fed’s final decision: if not directly through a bailout, then indirectly through layoffs and economic turmoil (37). Instead of saddling taxpayers with the lingering threat of a large bill in the future, the Fed made the prudent and far-sighted decision to hand them a smaller bill today. The Fed heeded the wisdom of the age-old adage: “better the devil you know than the devil you don’t.” Put simply, the economic “calculus” of policymakers was correct. While rescuing Lehman may have seemed tantalizing at the time, the long-term costs would have been far more consequential than the short-term benefits (38). Political connotations often accompany this argument, evocative of what some have christened the Fed’s “painful yet necessary lesson on moral hazard”; however, partisan beliefs are extraneous to the simple economic facts of the matter. From a fiscal perspective, policymakers made the right choice to let Lehman fail, shrewdly choosing long-term economic order over short-term benefits.

(3) The Right Decision from a Taxpayer’s Perspective

Given financial markets’ complete loss of confidence in Lehman and the unnervingly fragile state of the economy, an attempt at a Lehman rescue (within or above the law) would not only have been fruitless, but also a seriously unjust use of taxpayer dollars. The health of an investment bank hinges upon the willingness of customers and counterparties to deal with it, and according to former Secretary Geithner, “that confidence was just gone” (39). By the weekend, the market had already lost complete confidence in Lehman: “no one believed that the assets were worth their nominal value of $640 billion; a run on its assets was already underway, its liquidity was vanishing, and its stock price had fallen by 42% on just Friday September 12th; it couldn’t survive the weekend” (40). For all practical purposes, the markets had sealed Lehman’s fate, and a last-minute government liquidity line could have done nothing to change it.
In testimony, Bernanke aptly characterized a loan to supplant the firm’s disappearing liquidity as a prodigal expenditure, “merely wasting taxpayer money for an outcome that was unlikely to change” (41). After the fallout of the Barclays deal, many experts argued that the Fed should have provided liquidity support during a search for another buyer, since temporary liquidity assistance from the government might have extinguished the escalating crisis. However, such an open-ended government commitment, one that allowed Lehman to shop for an “indefinite time period,” would have been an absurd waste of public money (42). If the Fed had indeed provided liquidity aid up to some generous valuation of Lehman’s collateral, “the creditors to Lehman could have cashed out 100 cents on the dollar, leaving taxpayers holding the bag for losses” (43). The loan would not have prevented failure; it would only have chosen which creditors bore Lehman’s losses at the expense of others. On September 15, “Lehman [was] really nothing more than the sum of its toxic assets and shattered reputation as a venerable brokerage” (44). It would have been an egregious abuse of the democratic tax system for the government to bail out Lehman, leaving the public at the whims of fragile financial markets and saddling them with an uncapped bill for Wall Street’s imprudence. While virulent rumors of Lehman’s failure as political face-saving by regulators may prevail in mainstream media, I maintain that the Fed’s decision was the right one for the American public (45).

(4) TARP: Lehman Begat the Legislation that Revived the Financial System

In considering the relative importance of Lehman as a cause of the crisis, scholars must also consider the more nuanced and hard-hitting counterpart: “How important was Lehman as a cause of the end of the Crisis?” While, in the context of the suffering caused by the Great Recession and the polarizing rhetoric of “bailing out banks,” this question is politically unpopular, I broach it nonetheless, since it is an important facet of the debate on whether regulators made the “right decision.” Lehman’s failure was vitally important to the end of the Crisis – it allowed the Troubled Asset Relief Program (TARP) to pass Congress, a critical piece of legislation that equipped regulators with the tools ultimately necessary to repair the financial system (46). Every previous effort of the Fed (creating the PDCF, rescuing Bear Stearns, the conservatorship of Fannie and Freddie) had not been enough to salvage the deteriorating financial system – by September 2008, “Merrill Lynch, Lehman, and AIG were all at the edge of failure, and Washington Mutual, Wachovia, Goldman Sachs, and Morgan Stanley were all approaching the abyss” (47). The government needed the authority to inject capital into the financial system, and as described in Naomi Klein’s The Shock Doctrine, Lehman’s unexpected fall acted as the final catastrophic spark necessary to “prompt the hasty emergency action involving the relinquishment of rights and funds that would otherwise be difficult to pry loose from the citizenry” (48). With authority to inject up to $700 billion of capital into suffering non-bank institutions, TARP preserved the crumbling financial system by inspiring institutions to lend again. The government offered $250 billion in capital to the nine most systemically important institutions and used $90 billion in TARP financing to save the teetering financial giants Bank of America and Citigroup (49).
Exactly how much credit TARP deserves for averting financial catastrophe is unclear, yet the fact remains that, coupled with Geithner’s stress tests, TARP helped stop the country’s spiral into what could have been a crisis as dire as the Great Depression.

IV. Conclusion

In this essay, I have shown that the Fed exploited the vagueness of Section 13(3) to advance its political, economic, and moral agenda, and I have asserted that policymakers made the right choice in allowing Lehman to fail (weighing economic facts, the implications for the future economic landscape, taxpayers’ rights, and the passage of landmark legislation). It may have been easier for regulators to hide behind legal jargon and technicalities than to defend the economic rationale and practicality of their onerous decision to an audience of distressed Americans; however, this ease is not without the costs of continued confusion, misleading conventional wisdom, and a bitter citizenry.

“Lehman’s bankruptcy will forever be synonymous with the financial crisis and (resulting) wealth destruction.” – Paul Hickey, founder of Bespoke Investment Group (50).

Lehman’s failure left an indelible mark in history and a tireless refrain of diverging and potent emotions towards regulators: contempt for the Fed that “triggered the Crisis,” disdain for the government that bailed out Wall Street with TARP, and hatred of impressionable leaders who “bowed” to political pressure. It is indeed easier to accept a visceral and tangible moment like Lehman’s failure as a cause of suffering than the nihilistic and elusive fact that the buildup of leverage and the burst of the housing bubble caused the crisis. However, it is not enough for only academics and policymakers to understand that “Lehman’s failure was a product of the forces that created the crisis, not a fundamental cause of those forces” (51). Conventional wisdom must be rewritten for the sake of faith in the government and the prevention of future crises. Our acceptance of why Lehman was allowed to die must move beyond the apportioning of responsibility or the distribution of reparations – we must redirect the futile obsession over the legality and morality of the Fed’s decision towards the imbalances in the financial system that caused the Crisis to begin with.

Endnotes

1 Public Affairs, The Financial Crisis Inquiry Report, 340.
2 Ibid.
3 Clark, “Lehman Brothers Rescue Would Have Been Unlawful, Insists Bernanke.”
4 Judge, “Lehman Brothers: How Good Policy Can Make Bad Law.”
5 Fettig, “The History of a Powerful Paragraph.”
6 Ball, The Fed and Lehman Brothers, 5.
7 Stewart, “Eight Days.”
8 Public Affairs, The Financial Crisis Inquiry Report, 435.
9 Public Affairs, The Financial Crisis Inquiry Report, 340.
10 Ibid.
11 Calabria, “Letting Lehman Fail Was a Choice, and It Was the Right One.”
12 Chu, “Barclays Ends Talks to Buy Lehman Brothers.”
13 Ball, The Fed and Lehman Brothers.
14 Public Affairs, The Financial Crisis Inquiry Report, 337.
15 Ball, The Fed and Lehman Brothers, 141.
16 Ibid, 11.
17 Ibid, 133.
18 J.B. Stewart and Eavis, “Revisiting the Lehman Brothers Bailout That Never Was.”
19 Ibid.
20 Judge, “Lehman Brothers: How Good Policy Can Make Bad Law.”
21 Tarhan, “Does the Federal Reserve affect asset prices?”
22 Judge, “Lehman Brothers: How Good Policy Can Make Bad Law.”
23 Public Affairs, The Financial Crisis Inquiry Report, 433.
24 Sengupta & Tam.
25 J.B. Stewart, “Eight Days.”
26 Public Affairs, The Financial Crisis Inquiry Report, 435.
27 O’Brien, “Would Saving Lehman Have Saved Us from the Great Recession?”
28 Ibid.
29 Public Affairs, The Financial Crisis Inquiry Report, 436.
30 Geithner & Metrick, Ten Years after the Financial Crisis: A Conversation with Timothy Geithner.
31 Skeel, “History credits Lehman Brothers’ collapse for the 2008 financial crisis. Here’s why that narrative is wrong.”
32 Public Affairs, The Financial Crisis Inquiry Report, 436.
33 J.B. Stewart and Eavis, “Revisiting the Lehman Brothers Bailout That Never Was.”
34 Skeel, “History credits Lehman Brothers’ collapse for the 2008 financial crisis. Here’s why that narrative is wrong.”
35 Reinhart, “A Year of Living Dangerously: The Management of the Financial Crisis in 2008.”
36 Ibid.
37 Antoncic, “Opinion | Lehman Failed for Good Reasons.”
38 Reinhart, “A Year of Living Dangerously: The Management of the Financial Crisis in 2008.”
39 Geithner & Metrick, Ten Years after the Financial Crisis: A Conversation with Timothy Geithner.
40 J.B. Stewart, “Eight Days.”
41 Public Affairs, The Financial Crisis Inquiry Report, 435.
42 Ibid.
43 Ibid.
44 Grunwald, “The Truth About the Wall Street Bailouts.”
45 Erman, “Five years after Lehman, Americans still angry at Wall Street: Reuters/Ipsos poll.”
46 Geithner & Metrick, Ten Years after the Financial Crisis: A Conversation with Timothy Geithner.
47 Ibid.
48 Erman, “Five years after Lehman, Americans still angry at Wall Street: Reuters/Ipsos poll.”
49 J.B. Stewart, “Eight Days.”
50 Sraders, “The Lehman Brothers Collapse and How It’s Changed the Economy Today.”
51 Geithner & Metrick, Ten Years after the Financial Crisis: A Conversation with Timothy Geithner.

Bibliography

Antoncic, M. (2018, September). Opinion | Lehman Failed for Good Reasons. The New York Times. Retrieved from https://www.nytimes.com/2018/09/17/opinion/lehman-brothers-financial-crisis.html

Ball, L. (2016). The Fed and Lehman Brothers.

Calabria, M. (2014). Letting Lehman Fail Was a Choice, and It Was the Right One. Cato Institute. Retrieved December 7, 2019, from https://www.cato.org/publications/commentary/letting-lehman-fail-was-choice-it-was-right-one

Chu, K. (2008). Barclays Ends Talks to Buy Lehman Brothers. ABC News. Retrieved January 3, 2021, from https://abcnews.go.com/Business/story?id=5800790&page=1

Clark, A. (2010). Lehman Brothers Rescue Would Have Been Unlawful, Insists Bernanke. The Guardian. Retrieved January 1, 2021, from http://www.theguardian.com/business/2010/sep/02/lehman-bailout-unlawful-says-bernanke

Erman, M. (2013, September 15). Five years after Lehman, Americans still angry at Wall Street: Reuters/Ipsos poll. Reuters. Retrieved from https://www.reuters.com/article/us-wallstreet-crisis-idUSBRE98E06Q20130915

Fettig, D. (2008, June). The History of a Powerful Paragraph. Federal Reserve Bank of Minneapolis. Retrieved from https://www.minneapolisfed.org:443/article/2008/the-history-of-a-powerful-paragraph

Geithner, T., & Metrick, A. (2018). Ten Years after the Financial Crisis: A Conversation with Timothy Geithner. Retrieved from https://www.ssrn.com/abstract=3246017

Grunwald, M. (2014, September). The Truth About the Wall Street Bailouts. Time. Retrieved December 7, 2019, from https://time.com/3450110/aig-lehman/

Judge, K. (2018, September 11). Lehman Brothers: How Good Policy Can Make Bad Law. CLS Blue Sky Blog. Retrieved December 3, 2019, from http://clsbluesky.law.columbia.edu/2018/09/11/lehman-brothers-how-good-policy-can-make-bad-law/

O’Brien, M. (2018, September). Would saving Lehman have saved us from the Great Recession? The Washington Post. Retrieved December 4, 2019, from https://www.washingtonpost.com/business/2018/09/20/would-saving-lehman-have-saved-us-great-recession/
Reinhart, V. (2011). A Year of Living Dangerously: The Management of the Financial Crisis in 2008. Journal of Economic Perspectives, 25(1), 71–90. Retrieved from https://doi.org/10.1257/jep.25.1.71

Skeel, D. (2018, September 20). History credits Lehman Brothers’ collapse for the 2008 financial crisis. Here’s why that narrative is wrong. Brookings. Retrieved November 17, 2019, from https://www.brookings.edu/research/history-credits-lehman-brothers-collapse-for-the-2008-financial-crisis-heres-why-that-narrative-is-wrong/

Spector, M., & Craig, S. (2010, March 13). Repos Played a Key Role in Lehman’s Demise. Wall Street Journal. Retrieved from https://www.wsj.com/articles/SB10001424052748703447104575118150651790066

Sraders, A. (2018). The Lehman Brothers Collapse and How It’s Changed the Economy Today. TheStreet. Retrieved December 9, 2019, from https://www.thestreet.com/markets/lehman-brothers-collapse-14703153

Stewart, J.B. (2009, September). Eight Days. The New Yorker. Retrieved December 7, 2019, from https://www.newyorker.com/magazine/2009/09/21/eight-days

Stewart, J. B., & Eavis, P. (2014, September 29). Revisiting the Lehman Brothers Bailout That Never Was. The New York Times. Retrieved from https://www.nytimes.com/2014/09/30/business/revisiting-the-lehman-brothers-bailout-that-never-was.html

Tarhan, V. (1995). Does the federal reserve affect asset prices? Journal of Economic Dynamics and Control, 19(5), 1199–1222. Retrieved from https://doi.org/10.1016/0165-1889(94)00824-2

The Fed—Primary Dealer Credit Facility (PDCF). (n.d.). Retrieved December 5, 2019, from https://www.federalreserve.gov/regreform/reform-pdcf.htm

The Financial Crisis Inquiry Report. (2011). PublicAffairs.

  • Ticketmaster | brownjppe

Rewriting the Antitrust Setlist: Examining the Live Nation-Ticketmaster Lawsuit and its Implications for Modern Antitrust Law

Katya Tolunsky (Author)
Malcolm Furman, Arjun Ray (Editors)

I. Introduction

On November 15, 2022, the music industry witnessed an unprecedented event that would become a turning point in discussions about ticketing practices and market dominance. Millions of devoted Taylor Swift fans were devastated when they failed to secure tickets for the highly anticipated Eras Tour. The ticket release sparked chaos, with fans enduring hours – even days – on Ticketmaster’s website, battling extended delays, technical glitches, and unpredictable price fluctuations. Despite their unwavering persistence, many “Swifties” were left empty-handed. This high-profile debacle ignited a firestorm of criticism from politicians and consumers alike, who questioned Ticketmaster’s apparent lack of preparedness for the overwhelming demand. While not an isolated incident of consumer dissatisfaction, the scale of this event and the passionate outcry from Swift’s fan base catapulted long-standing issues with ticket availability, pricing, and fees into the national spotlight. The “Swift ticket fiasco” became a catalyst for broader scrutiny of Ticketmaster’s business practices. Lawmakers and consumer advocacy groups called for investigations into the company’s business model, while accusations circulated about Ticketmaster leveraging its market power to stifle competition and maintain high fees. This perfect storm of events set the stage for a renewed examination of antitrust concerns in the live entertainment industry, bringing the anticompetitive practices of Live Nation-Ticketmaster into the public political and legal spotlight. On May 23, 2024, the U.S. Department of Justice (DOJ) filed a civil antitrust lawsuit against Live Nation Entertainment (the merged company) for allegedly violating the terms of a 2010 settlement, which required Ticketmaster to license its software to competitors and prohibited Live Nation from retaliating against venues that use competing ticketing services and from engaging in anticompetitive practices. The DOJ’s complaint argues that Live Nation has used its control over concert venues and artists to pressure venues into using Ticketmaster and to punish those that don’t, effectively excluding rival ticketing services from the market. Specifically, the DOJ is suing Live Nation-Ticketmaster for violating Section 2 of the Sherman Antitrust Act and monopolizing markets across the live concert industry. This suit raises important questions about the application of the Sherman Act and the evolving approach to antitrust enforcement in the United States. At the heart of this case lies a fundamental clash between two competing philosophies of antitrust enforcement. For decades, the Chicago School approach has dominated American antitrust law, focusing narrowly on consumer welfare through the lens of prices and economic efficiency. However, a new perspective has emerged to challenge this framework. The “New Brandeis” movement, named after Supreme Court Justice Louis Brandeis and championed by current FTC Chair Lina Khan, advocates for a broader understanding of competition law that considers market structure, concentration of economic power, and impacts on democracy, not just consumer prices. As this antitrust movement gains prominence and momentum, the Live Nation-Ticketmaster case represents a critical test for the application of Section 2 of the Sherman Act in the digital age.
The outcome of this case will set important precedents for how antitrust law is applied to companies that dominate multiple interconnected markets. This paper seeks to analyze the evolution of antitrust law in the context of the Live Nation-Ticketmaster lawsuit. First, it details the 2010 Live Nation/Ticketmaster merger, the extensive criticism of that merger, and its terms. Second, it delves into the relevant history of the Sherman Antitrust Act and the evolution and enforcement of antitrust and monopoly law over the last one hundred years. Additionally, to illustrate the scope of anticompetitive behavior and the ways in which past antitrust cases have been prosecuted, the paper examines several notable cases concerning Section 2 of the Sherman Act. Third, this paper explores the recent shift in approach to antitrust law, characterized by the New Brandeis movement, and the broader debate surrounding the purpose and scope of antitrust enforcement. Lastly, this paper seeks to situate the Live Nation-Ticketmaster lawsuit in the context of this debate and analyze the implications and potential outcomes of the suit. Ultimately, this paper seeks to show that the DOJ’s original approval of the Live Nation-Ticketmaster merger in 2010 with behavioral remedies was inadequate in preventing anticompetitive practices and protecting consumer interests, and that structural remedies (such as breaking up the company) are necessary to restore effective competition in the live entertainment industry. The Live Nation-Ticketmaster merger of 2010 and its subsequent negative impact on consumers and the live entertainment industry illustrate the insufficiency of traditional consumer welfare-focused antitrust enforcement in addressing the complexities of modern markets, particularly in industries like live entertainment where vertical integration can lead to subtle forms of anticompetitive behavior. By examining how Live Nation's market power is reinforced through its data advantages and “flywheel” business model, this paper demonstrates why traditional antitrust frameworks struggle to address such modern competitive dynamics. Ultimately, this paper argues that the Live Nation-Ticketmaster case demonstrates the need for a broader interpretation and more aggressive enforcement of antitrust law, aligning with the New Brandeis approach.

II. The Live Nation-Ticketmaster Merger: Antitrust Considerations and Regulatory Response

In 2010, Live Nation, the world’s largest concert promoter, merged with Ticketmaster, the world’s dominant ticketing platform. At the time of the merger, Ticketmaster held an effective monopoly in the ticket sales market, with an estimated 80% market share for concerts in large venues. In 2008, Live Nation had launched its own ticketing platform, positioning itself as a rival to Ticketmaster by offering competitive pricing, leveraging its existing relationships with venues and artists, and promising to reduce service fees. This direct competition in ticketing, combined with Live Nation's dominant position in concert promotion, posed a significant threat to Ticketmaster's monopoly, a threat the merger would eliminate. Critics argued that the merger would lead to higher ticket prices, reduced competition, and a worse experience for consumers.
In his 2009 testimony before the Senate Committee on the Judiciary, Subcommittee on Antitrust, Competition Policy and Consumer Rights, Senior Fellow for the American Progress Action Fund David Balto said, “Eliminating a nascent competitor by acquisition raises the most serious antitrust concerns…By acquiring Ticketmaster, Live Nation will cut off the air supply for any future rival to challenge its monopoly in the ticket distribution market.” Despite this widespread criticism of the proposed merger and its potential consequences, the DOJ approved the merger. The DOJ nonetheless recognized the potential threats and the consumer criticism of the merger. In response to these concerns, the DOJ pointed to the limits of antitrust enforcement, noting that its role is to prevent anticompetitive harms from mergers, not to remake industries or address all consumer complaints. In a speech delivered on March 18, 2010, titled “The Ticketmaster/Live Nation Merger Review and Consent Decree in Perspective,” Assistant Attorney General for the Antitrust Division Christine A. Varney said:

“Our concern is with competitive market structure, so our job is to prevent the anticompetitive harms that a merger presents. That is a limited role: whatever we might want a particular market to look like, a merger does not provide us an open invitation to remake an industry or a firm’s business model to make it more consumer friendly…In the course of investigating this merger, we heard many complaints about trends in the live music industry, and many complaints from consumers about Ticketmaster. I understand that people view Ticketmaster’s charges, and perhaps all ticketing fees in general, as unfair, too high, inescapable, and confusing. We heard that it is impossible to understand the litany of fees and why those fees have proliferated. I also understand that consolidation has been going on in the industry for some time and the resultant economic pressures facing local management companies and promoters. Those are meaningful concerns, but many of them are not antitrust concerns. If they come from a lack of effective competition, then we hope to treat them as symptoms as we seek to cure the underlying disease. Where such issues concern consumer fairness, however, they are better addressed by other federal agencies.”

Varney’s statement delineates a narrow view of the DOJ’s role in merger review, focusing primarily on preventing specific antitrust violations rather than addressing broader consumer concerns or industry trends. This approach suggests that the DOJ saw its mandate as limited to addressing anticompetitive harms directly related to the merger, rather than using the merger review process to address wider industry problems or consumer dissatisfaction falling outside the scope of antitrust law. The merger itself raised both horizontal (direct competitors merging) and vertical (different levels of the supply chain merging) integration concerns. The DOJ approved the merger with certain conditions: Ticketmaster had to sell Paciolan (its self-ticketing company), Ticketmaster had to license its software to Anschutz Entertainment Group (AEG), and, most importantly, Live Nation was prohibited from retaliating against venues that use competing ticketing services. In the merger settlement, the DOJ stated that it would monitor compliance with the agreement for ten years and establish an Order Compliance Committee to receive reports of concerning behavior from industry players.
The DOJ also emphasized the importance of industry participation in monitoring and reporting potential violations of the agreement or antitrust laws. These conditions were intended to address the most immediate competitive concerns raised by the merger. Thus, the DOJ primarily relied on behavioral remedies rather than structural changes, an approach that would later be criticized as insufficient to prevent anticompetitive practices. Structural changes, in contrast, could have involved more drastic measures such as requiring the divestiture of certain business units, breaking up the merged entity into separate companies, or imposing limitations on the company's ability to operate in multiple segments of the live entertainment industry. These types of structural remedies aim to fundamentally alter the company's market position and capabilities, rather than merely regulating its behavior. In addition, the reliance on industry self-reporting and time-limited monitoring also raised questions about the long-term effectiveness of these measures. In retrospect, the DOJ’s approach to the Live Nation-Ticketmaster merger exemplifies the limitations of traditional antitrust enforcement in addressing complex, vertically integrated industries. By focusing on narrow, immediate competitive effects and relying heavily on behavioral remedies, the DOJ underestimated the long-term impact of the merger on market dynamics in the live entertainment industry. This case would later become a touchstone in debates about the adequacy of existing antitrust frameworks and the need for more comprehensive approaches to merger review and enforcement. III. The Sherman Act and the Evolution of Antitrust Jurisprudence The Sherman Antitrust Act, passed in 1890, was a landmark piece of legislation that emerged from the economic and political turmoil of the late 19th century’s Gilded Age. This era saw rapid industrialization and the rise of powerful trusts and monopolies that dominated key industries such as oil, steel, and railroads. These business entities, through their immense economic power, were able to stifle competition, manipulate prices, and exert immense influence on the political process. Public outcry against these practices grew, with farmers, small business owners, and laborers demanding government action to curb corporate excess. In response to these concerns, the Sherman Act became the first federal legislation to outlaw monopolistic business practices, particularly by prohibiting trusts. A trust in this context was an arrangement by which stockholders in several companies would transfer their shares to a single set of trustees, receiving in exchange a certificate entitling them to a specified share of the consolidated earnings of the jointly managed companies. This structure allowed for the concentration of economic power that the Act sought to prevent. The Sherman Act outlawed all contracts and conspiracies that unreasonably restrained interstate and foreign trade. Its authors believed that an efficient free market system was only possible with robust competition. While the Act targeted trusts, it also addressed monopolies – markets where a single company controls an entire industry. While the Sherman Act broadly addresses anticompetitive practices, Section 2 is particularly relevant to analyze the Live Nation-Ticketmaster case as it directly pertains to monopolization. Section 2 of the Sherman Act specifically prohibits monopolization, attempted monopolization, and conspiracies to monopolize. 
Essentially, it outlaws the acquisition or maintenance of monopoly power through unfair practices. However, it’s important to note that the purpose of Section 2 is not to eliminate monopolies entirely, but rather to promote a market-based economy and preserve competition. This nuanced approach taken by Section 2 recognizes that some monopolies may arise from superior business acumen or innovation, and only seeks to prevent those achieved or maintained through anticompetitive means. The Sherman Act laid the foundation for antitrust law in the United States, reflecting a societal commitment to maintaining competitive markets and limiting the concentration of economic power. Its passage marked a significant shift in the government’s role in regulating business practices and shaping the economic landscape. While the Sherman Act laid the groundwork for antitrust law in the United States, it was supplemented by two important pieces of legislation in 1914: the Clayton Antitrust Act and the Federal Trade Commission Act. The Clayton Act expanded on the Sherman Act by prohibiting specific anticompetitive practices such as price discrimination, exclusive dealing contracts, tying arrangements, and mergers that substantially lessen competition. The Federal Trade Commission Act created the Federal Trade Commission (FTC) as an independent regulatory agency to prevent unfair methods of competition and deceptive acts or practices in commerce. Together, these Acts addressed some of the Sherman Act’s limitations and provided more specific guidelines for antitrust enforcement, further solidifying the government’s commitment to maintaining competitive markets. The distinction between the Clayton Act and Sherman Act is particularly relevant to understanding the Live Nation-Ticketmaster case. Section 7 of the Clayton Act governs merger review, requiring pre-emptive intervention to prevent mergers that may substantially lessen competition. In contrast, Section 2 of the Sherman Act addresses anticompetitive conduct by existing monopolists. The 2010 Live Nation-Ticketmaster merger was reviewed under Clayton Act Section 7’s forward-looking standard, while the 2024 case challenges ongoing anticompetitive conduct under Sherman Act Section 2. This dual application of antitrust law to the same company highlights the complementary yet distinct roles of merger review and monopolization enforcement. The early enforcement and interpretation of the Sherman Act were shaped by landmark cases that helped define the scope and application of antitrust law. In Standard Oil Co. of New Jersey v. United States (1911), the Supreme Court established the “rule of reason” approach to analyzing antitrust violations. This case resulted in the breakup of Standard Oil, demonstrating the Act’s power to dismantle monopolies. The Court held that only “unreasonable” restraints of trade were prohibited, introducing a more limited interpretation of the Act. The “rule of reason” approach meant that the Court would consider the specific facts and circumstances of each case to determine whether a particular restraint of trade was unreasonable. The case also established that the Sherman Act should be interpreted in light of its broad policy goals rather than strictly construed. This approach had a significant impact on future antitrust enforcement. It allowed for a more flexible and adaptive application of the Act, enabling courts and regulators to address new forms of anticompetitive behavior as markets evolved. 
This interpretive framework empowered enforcers to look beyond the literal text of the Act and consider the overarching aims of promoting competition and protecting consumer welfare. As a result, antitrust enforcement could more effectively respond to changing economic conditions and business practices, particularly as industries became more complex and interconnected in the 20th century. Later, in United States v. Alcoa (1945), the Court of Appeals for the Second Circuit further refined the interpretation of the Sherman Act. Judge Learned Hand’s opinion clarified that merely possessing monopoly power is not illegal; rather, the Act prohibits the deliberate acquisition or maintenance of that power through exclusionary practices. Alcoa thus established an important distinction between achieving monopoly through superior skill, foresight, and industry, which is lawful, and maintaining it through anticompetitive conduct, which violates the Act. These cases illustrate the evolving understanding of the Sherman Act, moving from a strict interpretation to a more nuanced approach that considered market dynamics and the effects of business practices on competition. The mid-20th century saw a significant shift in antitrust enforcement toward a structural approach focused on market concentration and firm size. During this era, roughly spanning the late 1930s to the early 1960s, the prevailing view among federal antitrust authorities, economists, and policymakers was that high market concentration was inherently harmful to competition. The passage of the Celler-Kefauver Act in 1950, which strengthened merger control, exemplified this approach. Influenced by economists from the Harvard School of industrial organization, particularly Joe Bain, antitrust authorities presumed that market structure determined conduct and performance. This “structure-conduct-performance” paradigm, central to the Harvard School's approach, posited that industry structure (like concentration levels) directly influenced firm behavior and market outcomes. This led to aggressive enforcement actions, including the breakup of large firms and the blocking of mergers that would have significantly increased market concentration. However, by the mid-1960s, antitrust thinking began to evolve, considering both market structure and firm conduct. This shift was reflected in the landmark 1966 Supreme Court case United States v. Grinnell Corp. , which established the modern two-part test for monopolization. The Grinnell test requires proof of both “the possession of monopoly power in the relevant market” and “the willful acquisition or maintenance of that power as distinguished from growth or development as a consequence of a superior product, business acumen, or historic accident.” This test, while still considering market power, introduced a focus on how that power was obtained or maintained. While the earlier era did consider power acquisition to some extent, the Grinnell test formalized and emphasized this aspect. It required a more comprehensive examination of a firm’s conduct and its effects on competition, moving beyond the primarily structural approach that often presumed anticompetitive effects from high market concentration alone. The Grinnell test has since been widely applied in monopolization cases under Section 2 of the Sherman Act, reflecting a more nuanced approach that aims to preserve competition without necessarily eliminating all monopolies.
This evolution in antitrust enforcement demonstrates a move towards balancing concerns about market structure with considerations of firm conduct and efficiency. However, this balanced approach would soon give way to a more dramatic shift in antitrust philosophy that prioritized economic efficiency above other considerations. During the 1970s and 1980s, the Chicago School of Economics profoundly influenced the trajectory and scope of antitrust law and policy in the United States. This approach, led by economists and legal scholars such as Robert Bork, Richard Posner, and George Stigler, represented a significant shift in antitrust thinking. The Chicago School advocated for the “consumer welfare” standard as the primary goal of antitrust policy. This approach focused on economic efficiency and lower prices for consumers, rather than protecting competitors or maintaining a particular market structure. They argued that many practices previously considered anticompetitive could actually benefit consumers through increased efficiency. For example, Chicago School theorists argued that many mergers, even those that increased market concentration, could lead to efficiencies that benefit consumers. These efficiencies could manifest in several ways: through economies of scale that reduce production costs and potentially lower prices; through improved resource allocation that enhances product quality or variety; or through increased innovation. The Chicago School contended that these efficiency gains could outweigh potential negative effects of increased market concentration, ultimately resulting in net benefits for consumers in the form of lower prices, better products, or increased innovation. This led to a more lenient approach to DOJ merger review, with a higher bar for proving that a merger would harm competition. Vertical mergers (between companies at different levels of the supply chain) were viewed particularly favorably, as they were seen as potentially efficiency-enhancing. The Chicago School was skeptical of claims that vertical integration or vertical restraints (like exclusive dealing arrangements) were inherently anticompetitive. They argued that these practices often had pro-competitive justifications and should be judged based on their economic effects rather than per se rules. The Chicago School was driven by a strong belief in the self-correcting nature of markets. This thinking greatly influenced antitrust enforcement agencies and courts during the Reagan administration and beyond. It led to a significant reduction in antitrust enforcement actions and a higher bar for proving anticompetitive harm. This shift represented a move away from the structural approach of the mid-20th century towards a more economics-focused, effects-based analysis of competitive harm. Antitrust attorney William Markham offers a scathing critique of the consumer welfare standard’s impact on antitrust enforcement. He argues that since the late 1970s, courts have adopted increasingly restrictive antitrust doctrines based on this standard, which he views as misnamed and harmful to consumers. Markham contends that these doctrines have allowed various forms of monopolistic and anticompetitive practices to flourish unchecked. 
He states that the standard permits such practices “so long as the offenders take care not to charge prices that are demonstrably and provably supracompetitive.” This critique highlights how the narrow focus on consumer prices under the consumer welfare standard may overlook other forms of competitive harm. It’s important to understand this context when examining more recent developments and debates in antitrust law, including the challenges posed by digital markets and the arguments of the New Brandeis movement. IV. Judicial Interpretation of Section 2: Key Cases and Anticompetitive Practices To better understand how Section 2 of the Sherman Act has been applied in practice, it’s important to examine key antitrust cases that have shaped its interpretation and enforcement. These cases not only illustrate various types of anti-competitive practices but also demonstrate the evolution of antitrust thinking, particularly the rising influence of the Chicago School’s consumer welfare standard and subsequent challenges to this approach. Anticompetitive practices can take many forms, including refusals to deal, predatory pricing, tying, and exclusive dealing arrangements. Their legality often depends on specific facts, market conditions, and the prevailing economic theories of the time. This section examines several landmark cases that highlight these practices and trace the trajectory of antitrust law from the mid-1980s through the early 2000s, a period marked by significant shifts in antitrust philosophy and enforcement approaches. The 1985 Supreme Court case Aspen Skiing Co. v. Aspen Highlands Skiing Corp. marked a significant development in antitrust law’s approach to refusal to deal practices, a type of anticompetitive behavior where a firm with market power declines to do business with a competitor. The case involved Aspen Skiing Company, which owned three of four ski areas in Aspen, CO, discontinuing a long-standing joint lift ticket program with Aspen Highlands, the owner of the fourth area. While the Chicago School approach generally viewed refusals to deal as permissible, the Court in this case took a different stance. It ruled that this refusal to continue a voluntary cooperative venture could violate Section 2 of the Sherman Act, as it lacked any normal business justification and appeared designed to eliminate competition. This decision, occurring early in the ascendancy of the Chicago School, demonstrated a willingness to consider factors beyond short-term consumer welfare in antitrust analysis. Justice Stevens’ opinion emphasized the importance of intent in determining whether conduct is “exclusionary,” “anticompetitive,” or “predatory,” introducing a more contextualized approach to assessing market behavior. While not fully embracing the consumer welfare standard, the Court did consider the impact on consumers, noting that the joint ticket was popular and its elimination inconvenienced skiers. This case thus represents a crucial step in the evolution of antitrust law, bridging the gap between earlier, more aggressive interpretations of the Sherman Act and the more economics-focused analyses that would follow. It expanded the scope of antitrust enforcement by establishing that, in some cases, even a unilateral refusal to deal could be considered anticompetitive.
Aspen Skiing set the stage for later cases dealing with complex market dynamics, particularly in industries where control over key resources or platforms can significantly impact competition – a concept that becomes increasingly relevant in the digital age and in cases like the Live Nation-Ticketmaster merger. As antitrust thinking continued to evolve, the influence of the Chicago School became more pronounced, as evidenced in subsequent landmark cases. This shift was reinforced by changes in the Supreme Court’s composition during the 1970s and 1980s, with appointments by Presidents Nixon and Reagan bringing more conservative justices to the bench who were often sympathetic to Chicago School economic theories. This changing court composition, coupled with the growing academic influence of the Chicago School, contributed to the changes in antitrust jurisprudence. The 1993 Supreme Court case Brooke Group Ltd. v. Brown & Williamson Tobacco Corp. marked a significant shift in the treatment of predatory pricing claims, reflecting the growing dominance of the Chicago School’s consumer welfare standard. Predatory pricing occurs when a firm prices its products below cost with the intention of driving competitors out of the market, allowing the predator to later raise prices and recoup its losses. In this case, the Brooke Group accused Brown & Williamson of predatory pricing in the generic cigarette market. The Court established a two-pronged test for predatory pricing: (1) the plaintiff must prove that the prices are below an appropriate measure of cost, and (2) the plaintiff must demonstrate that the predator had a “reasonable prospect” of recouping its losses. This stringent standard, making predatory pricing claims extremely difficult to prove, clearly reflects the Chicago School’s skepticism towards such claims. The Court’s reasoning prioritized short-term consumer benefits (lower prices) over long-term competitive concerns, embodying the consumer welfare standard. Justice Kennedy’s majority opinion explicitly cited Chicago School scholars, demonstrating how economic theory had come to dominate antitrust jurisprudence. This case illustrates how the Chicago School approach narrowed the scope of antitrust enforcement, potentially allowing some anticompetitive practices to escape scrutiny if they resulted in short-term consumer benefits. In the context of cases like Live Nation-Ticketmaster, this ruling underscores the challenges in proving anticompetitive behavior when short-term consumer benefits are present. The rise of the digital economy in the late 1990s and early 2000s presented new challenges to antitrust enforcement, leading to a reconsideration of established doctrines. While the Chicago School’s influence remained strong, the emergence of new technologies and business models began to test the limits of its consumer welfare-focused approach. The United States v. Microsoft Corp. (2001) case marked a pivotal moment in antitrust law’s application to the emerging digital economy, introducing new considerations for tying and monopoly maintenance in software markets. Tying occurs when a company requires customers who purchase one product to also purchase a separate product, potentially leveraging dominance in one market to gain advantage in another. The U.S.
government accused Microsoft of illegally maintaining its monopoly in the PC operating systems market by tying its Internet Explorer browser to the Windows operating system and engaging in exclusionary contracts with PC manufacturers and Internet service providers. This case challenged the Chicago School’s typically permissive view of tying arrangements, which often saw them as enhancing efficiency from a consumer welfare standpoint. The Court of Appeals for the D.C. Circuit ruled that Microsoft had violated Section 2 of the Sherman Act, finding that Microsoft’s practices, in aggregate, served to maintain its monopoly power by stifling competition from potential disruptors like Netscape’s browser and Sun’s Java technologies. While the court’s analysis still employed the consumer welfare standard, it showed a willingness to consider a broader range of anticompetitive effects, including harm to innovation and potential future competition. This approach reflected a nuanced evolution of antitrust thinking, acknowledging the unique characteristics of software markets and the rapid pace of technological change. Microsoft set important precedents for how antitrust law could be applied to fast-moving technology markets and platform economies, influencing later cases involving tech giants and potentially informing the analysis of platform-based businesses like Live Nation-Ticketmaster. It demonstrated that even in the era of Chicago School dominance, courts could adapt antitrust principles to address new forms of market power in the digital age. The resulting settlement, which imposed behavioral remedies rather than structural ones, sparked ongoing debates about the adequacy of traditional antitrust tools in addressing the unique characteristics of digital markets. Despite the more comprehensive and context-specific approach in Microsoft, the influence of the Chicago School remained strong, as demonstrated in the next significant case. Verizon Communications Inc. v. Law Offices of Curtis V. Trinko, LLP (2004) significantly narrowed the scope of antitrust liability for refusal to deal, revisiting and limiting the principles established in Aspen Skiing. In this case, Trinko, a law firm and Verizon customer, alleged that Verizon had violated Section 2 of the Sherman Act by providing insufficient assistance to new competitors in the local telephone service market, assistance required by the 1996 Telecommunications Act. The Court, in a unanimous decision authored by Justice Antonin Scalia, ruled in favor of Verizon, significantly limiting the circumstances under which a refusal to deal could violate antitrust law. Unlike in Aspen Skiing, where there was a history of voluntary cooperation, the Court emphasized that firms, even monopolists, generally have no duty to assist competitors. This ruling clearly reflects the Chicago School’s skepticism towards government intervention in markets and its focus on efficiency over other competitive concerns. The Court emphasized the importance of allowing firms to freely choose their business partners, arguing that forced cooperation could reduce companies’ incentives to invest and innovate. This aligns with the Chicago School’s concern about “false positives” in antitrust enforcement – the idea that overly aggressive antitrust action might mistakenly punish pro-competitive behavior, potentially discouraging beneficial business practices.
By setting a high bar for refusal to deal claims, the Trinko decision further constrained the reach of antitrust law, potentially allowing monopolists more leeway in their dealings with competitors. This legal environment, which emphasized a narrow interpretation of anticompetitive behavior, set the stage for future mergers that consolidated market power across related industries. The 2010 approval of the Live Nation-Ticketmaster merger is a prime example of how this permissive approach to antitrust enforcement allowed for the creation of a vertically integrated entity with unprecedented control over the live entertainment industry. This case exemplifies how the Chicago School approach may have inadvertently created blind spots in antitrust enforcement, particularly regarding the long-term effects of monopoly power on innovation and competition. These cases collectively demonstrate the complex evolution of Section 2 application across various industries and business practices. From the nuanced approach in Aspen Skiing, through the height of Chicago School influence in Brooke Group and Trinko, to the adaptation to new technological challenges in Microsoft, they illustrate how antitrust law has grappled with changing economic theories and market realities. The cases show a clear trajectory of increasing influence of the Chicago School’s consumer welfare standard, but also reveal moments of resistance or adaptation to this approach when confronted with novel market dynamics. The Microsoft case, in particular, marks a significant point in this evolution, demonstrating how courts began to recognize the unique challenges posed by the digital economy. By examining these cases, it is possible to trace how the interpretation and application of Section 2 of the Sherman Act has shifted over time. This evolution provides crucial context for understanding current debates about antitrust enforcement, particularly in rapidly evolving digital markets, and sets the stage for the emergence of new approaches like the New Brandeis movement. In considering the Live Nation-Ticketmaster case, this historical context helps to understand the complex landscape of antitrust enforcement and the challenges in addressing anticompetitive behavior today. V. The New Brandeis Movement: Redefining Antitrust for the Modern Era The landscape of antitrust enforcement is undergoing a fundamental shift as new perspectives challenge long-held assumptions about competition law. The limitations of the Chicago School approach, particularly evident in cases like Microsoft and Trinko, have sparked a reimagining of antitrust’s fundamental purposes and tools.
As University of Michigan Law Professor Daniel Crane noted recently, “the bipartisan consensus that antitrust should solely focus on economic efficiency and consumer welfare has quite suddenly come under attack from prominent voices [from the political left and right] calling for a dramatically enhanced role for antitrust law in mediating a variety of social, economic, and political friction points, including employment, wealth inequality, data privacy and security, and democratic values.” At the heart of this evolution in antitrust thinking lies a debate between the traditional consumer welfare-focused approach and the emerging New Brandeis movement. For decades, the standard approach has emphasized consumer welfare as the primary goal, focusing on economic efficiency and preventing practices that directly harm consumers through higher prices, reduced output, or decreased innovation. This framework has generally led to a more permissive attitude toward mergers and a higher bar for finding antitrust violations. In contrast, the New Brandeis movement, championed by figures like FTC Chairwoman Lina Khan, advocates for a broader understanding of antitrust law’s goals. This perspective, sometimes critically dubbed “hipster antitrust,” contends that enforcement should consider additional factors such as market structure, the distribution of economic power, and the impact on workers, small businesses, and political democracy. The movement’s proponents have been particularly vocal about the need to reassess antitrust approaches in the context of the digital economy, expressing concern over the power wielded by large tech platforms. Lina Khan, a prominent figure in contemporary antitrust discourse, has developed an extensive body of work articulating the principles of the New Brandeis movement. In her article “The New Brandeis Movement: America’s Antimonopoly Debate,” Khan outlines this approach, which draws inspiration from Justice Louis Brandeis’s support of “America’s Madisonian traditions—which aim at a democratic distribution of power and opportunity in the political economy.” The movement represents a significant departure from the Chicago School of antitrust thinking. While the Chicago School emphasized efficiency, prices, and consumer welfare, the New Brandeis approach advocates for a return to a market structure-oriented competition policy. Key tenets include viewing economic power as intrinsically tied to political power; recognizing that some industries naturally tend towards monopoly and require regulation; emphasizing the structures and processes of competition rather than just outcomes; and rejecting the notion that market “forces” naturally lead to optimal economic outcomes or consumer welfare, understanding markets instead as fundamentally shaped and structured by law and policy. In her article “The Ideological Roots of America’s Market Power Problem,” Khan further critiques the current antitrust framework, arguing that it has weakened enforcement and allowed high concentrations of market power across sectors.
She asserts that addressing this issue requires challenging the ideological underpinnings of the current framework, writing, “Identifying paths for greater enforcement within a framework that systematically disfavors enforcement will fall short of addressing the scope of the market power problem we face today.” Ultimately, Khan and other New Brandeis proponents argue for a fundamental rethinking of antitrust’s goals and methods, advocating a return to its original purpose of distributing economic power and preserving democratic values. Building upon her critique of current antitrust frameworks, Khan has written extensively about the unique challenges posed by big tech companies, arguing that traditional enforcement methods are inadequate to address their market power. In her influential article “Amazon’s Antitrust Paradox,” Khan contends that the current antitrust framework is ill-equipped to tackle the anticompetitive effects of digital platforms like Amazon. These platforms, she argues, can leverage their market power and access to data to engage in predatory pricing, disadvantage rivals, and entrench their dominance. Khan writes in the abstract, “This Note argues that the current framework in antitrust—specifically its pegging competition to ‘consumer welfare,’ defined as short-term price effects—is unequipped to capture the architecture of market power in the modern economy. We cannot cognize the potential harms to competition posed by Amazon’s dominance if we measure competition primarily through price and output.” The article explains that despite Amazon’s massive growth, it generates low profits, often pricing products below cost and focusing on expansion rather than short-term gains. This strategy has allowed Amazon to expand far beyond retail, becoming a major player in various sectors including marketing, publishing, entertainment, hardware manufacturing, and cloud computing. Khan argues that this positions Amazon as a critical platform for many other businesses. She further elaborates, “First, the economics of platform markets create incentives for a company to pursue growth over profits, a strategy that investors have rewarded. Under these conditions, predatory pricing becomes highly rational—even as existing doctrine treats it as irrational and therefore implausible.” Khan argues that in platform markets like Amazon’s, predatory pricing can be rational even if product prices appear to be at market rates. This is because the goal is not immediate profit, but rather to rapidly expand market share and establish dominance. The company can sustain short-term losses or razor-thin margins on product sales because the real value lies in becoming the dominant platform, which can lead to long-term profitability through various means such as data collection. Traditional antitrust doctrine, however, often assumes that below-cost pricing is irrational unless the company can quickly recoup its losses through higher prices, which may not apply in these complex, multi-sided markets. This creates a “paradox” where Amazon’s practices may be anticompetitive, yet they escape scrutiny under existing regulations. To address Amazon’s market power, one of Khan’s major suggestions is to return antitrust and competition policy to its traditional, more structure-oriented approach.
Khan’s influential academic critiques of current antitrust frameworks, particularly her analysis of Amazon’s market power, laid the groundwork for her approach as FTC chair, where she has sought to translate these ideas into concrete enforcement actions. Since Lina Khan’s appointment as chair of the FTC in 2021 by President Joe Biden, the agency has embarked on a more aggressive approach to antitrust enforcement, challenging some of America’s largest corporations and implementing significant policy shifts. This new direction has yielded mixed results and sparked debates about the future of competition policy in the United States. Khan’s FTC has increased scrutiny of Big Tech, filing an amended antitrust complaint against Facebook (Meta) that challenges its acquisitions of Instagram and WhatsApp, and suing to block Microsoft’s acquisition of Activision Blizzard, citing competition concerns in the video game industry. The agency has also initiated actions against other tech giants like Amazon. Under Khan’s leadership, the FTC has implemented stricter merger enforcement, including a more aggressive approach to reviewing mergers, particularly vertical mergers. The agency withdrew the 2020 Vertical Merger Guidelines, signaling skepticism towards vertical integration, and revised merger guidelines in collaboration with the Department of Justice. There has also been an increased focus on “killer acquisitions,” in which large companies buy potential competitors. Khan has emphasized structural remedies over behavioral ones, advocating for more dramatic interventions like breaking up companies in certain cases. Additionally, recognizing the growing importance of data as a competitive asset, the FTC has integrated privacy and data protection concerns into its antitrust approach. For instance, the agency pursued a case against data broker Kochava for selling sensitive geolocation data, highlighting how control over user data can contribute to market power and potentially anticompetitive practices in the digital economy. The implementation of Khan’s approach has seen both successes and setbacks. Partial victories include the FTC v. Facebook (Meta) case, where the court allowed a revised complaint to proceed, and the FTC v. Illumina/Grail case, where the agency successfully challenged a vertical merger, albeit on largely traditional antitrust grounds. However, the FTC faced a setback when its attempt to block Meta’s acquisition of Within Unlimited was rejected. Ongoing challenges persist as courts have shown varying degrees of receptiveness to the expanded view of antitrust harm. As of April 2024, there had been no definitive high-level court ruling fully endorsing or rejecting the New Brandeis approach, with many decisions still relying heavily on the consumer welfare standard. Khan also faces political opposition and challenges to her rule-making initiatives. While Khan has successfully shifted the FTC’s focus towards more aggressive antitrust enforcement and brought increased attention to issues like data privacy and labor market effects, the legal and practical adoption of the New Brandeis philosophy remains a work in progress. The evolving legal landscape sets the stage for analyzing how future cases, such as potential actions against Ticketmaster, might proceed under this new, more expansive view of antitrust enforcement.
VI. The Live Nation-Ticketmaster Case: A Critical Analysis of Market Power and Competitive Effects In May 2024, the DOJ, joined by 30 state and district attorneys general, filed a civil antitrust lawsuit against Live Nation Entertainment Inc. and its wholly owned subsidiary Ticketmaster “for monopolization and other unlawful conduct that thwarts competition in markets across the live entertainment industry.” More specifically, the DOJ accused Live Nation of violating Section 2 of the Sherman Act. In a subsequent press release, the DOJ highlighted several key issues resulting from Live Nation-Ticketmaster’s conduct. The DOJ argued that the company’s practices have led to a lack of innovation in ticketing, higher prices for U.S. consumers compared to other countries, and the use of outdated technology. Further, the DOJ asserted that Live Nation-Ticketmaster “exercises its power over performers, venues, and independent promoters in ways that harm competition” and “imposes barriers to competition that limit the entry and expansion of its rivals.” The lawsuit, which calls for structural relief – primarily the breakup of Live Nation and Ticketmaster – aims to reintroduce competition in the live concert industry, offer fans better options at more affordable prices, and create more opportunities for musicians and other performers at venues. The DOJ claims Live Nation-Ticketmaster uses a “flywheel” business model that self-reinforces its market dominance. This model involves using revenue from fans and sponsorships to secure exclusive deals with artists and venues, creating a cycle that excludes competitors. The complaint outlines several anticompetitive practices, including: partnering with potential rival Oak View Group to avoid competition, threatening retaliation against venues working with competitors, using long-term exclusive contracts with venues, restricting artists’ venue access unless they use Live Nation’s promotion services, and acquiring smaller competitors. The DOJ argues these practices create barriers for rivals to compete fairly. Live Nation Entertainment is the world’s largest live entertainment company, controlling numerous venues and generating over $22 billion in annual revenue globally. The DOJ’s action aims to address these alleged monopolistic practices in the live entertainment industry. Attorney General Merrick B. Garland said, “We contend that Live Nation uses illegal and anti-competitive methods to dominate the live events industry in the U.S., negatively impacting fans, artists, smaller promoters, and venue operators. This dominance leads to higher fees for fans, fewer concert opportunities for artists, reduced chances for smaller promoters, and limited ticketing options for venues. It’s time to break up Live Nation-Ticketmaster.” Beyond traditional market control, Live Nation’s monopolistic position is further entrenched by its significant data advantages, which raise additional competitive and privacy concerns. Through its ticketing operations and venue management, Live Nation amasses vast amounts of consumer data, including purchasing habits, musical preferences, and demographic information. This data not only enhances Live Nation’s ability to target marketing and adjust pricing strategies but also creates a major barrier to entry for potential competitors who lack access to such comprehensive consumer insights.
Moreover, the company’s control over this data raises privacy concerns, as consumers may have limited understanding of how their information is being used or shared across Live Nation’s various business segments. These issues mirror broader debates in the digital age about the role of data in maintaining market power, with parallels to concerns raised about tech giants like Google and Facebook. As such, any antitrust action against Live Nation must consider not only traditional measures of market power but also the competitive advantages and potential privacy implications of its data practices. This aspect of the case underscores the need for antitrust enforcement to evolve in response to the increasing importance of data in modern business models. Notably, the DOJ focuses on Live Nation-Ticketmaster’s anticompetitive tactic of threatening and retaliating against venues that work with rivals. In the press release, the DOJ writes, “Live Nation-Ticketmaster’s power in concert promotions means that every live concert venue knows choosing another promoter or ticketer comes with a risk of drawing an adverse reaction from Live Nation-Ticketmaster that would result in losing concerts, revenue, and fans.” This conduct directly violates the terms of the 2010 consent decree, under which Live Nation was prohibited from retaliating against venues that use competing ticketing services. Considering that the current lawsuit’s main goal is the breakup of Ticketmaster and Live Nation, there exists an undeniable irony in the DOJ seeking to undo its own actions (approving the merger in 2010). The head of Jones Day’s antitrust practice, Craig Waldman, said, “The DOJ is breaking out a really big gun here — seeking to blow up a company that was created with its approval. That looms large even though the DOJ has and will continue to try to frame Live Nation’s conduct as going well beyond the scope of the merger.” In hindsight, it is clear that the DOJ’s approval of the 2010 merger was an egregious mistake. Diana Moss, vice president and director of competition policy at the Progressive Policy Institute, said, “The Live Nation-Ticketmaster merger was allowed to proceed in 2010, but the decision was an abject failure of antitrust enforcement. Instead of blocking the merger, the DOJ required the company, then with an 80% share of the ticketing market, to comply with ineffective conditions.” The continued anticompetitive practices and market dominance of Live Nation-Ticketmaster after the approved merger demonstrate that behavioral remedies were insufficient to protect competition. As such, structural remedies, specifically breaking up the company, are necessary to restore competition in the live entertainment industry. The extensive pushback and criticism that took place at the time of the merger’s approval further highlight the limited scope and approach of antitrust enforcement, particularly when it comes to mergers. The Live Nation-Ticketmaster case will proceed in New York’s Southern District, known for its slow litigation process, potentially delaying a trial until late 2026. In its defense, Live Nation argues that it does not hold a monopoly, claiming that its profit margins are low and that ticket prices are influenced more by factors like artist popularity and secondary ticketing markets than by its own practices. Live Nation contends that the efficiencies achieved by merging with Ticketmaster benefit the industry by offering better services and prices compared to separating the companies.
The company emphasizes that its vertical integration—combining promotion and ticketing services—creates a more efficient and artist-friendly business model. Live Nation also asserts that the secondary ticketing market, rather than its own practices, is primarily responsible for high ticket prices. The case will scrutinize whether the efficiencies claimed by Live Nation justify its market control or if the harm to competition outweighs these benefits. The DOJ’s push for a breakup, and its refusal to settle for anything less, reflects the relative success of the New Brandeis movement, particularly when considering the FTC’s revised merger guidelines in collaboration with the DOJ. When analyzed through the lens of the Grinnell test, Live Nation’s conduct clearly meets both prongs for monopolization under Section 2 of the Sherman Act. First, Live Nation undoubtedly possesses monopoly power in the relevant markets of concert promotion and ticketing. With an estimated 80% market share in ticketing for major concert venues and its dominant position in concert promotion, Live Nation far exceeds the typical thresholds courts have used to identify monopoly power. The company’s ability to impose high fees, dictate terms to artists and venues, and persistently maintain its market position despite widespread consumer dissatisfaction further evidences its monopoly power. Second, Live Nation has willfully acquired and maintained this power through exclusionary practices, not merely through superior products or business acumen. The DOJ’s complaint outlines numerous anticompetitive tactics, including threatening retaliation against venues that use competing services, leveraging its control over artists to pressure venues, and using long-term exclusive contracts to lock out competitors. These practices go well beyond legitimate competition based on merit. Moreover, Live Nation’s strategic acquisitions of potential competitors and its alleged collusion with Oak View Group to avoid competition further demonstrate its willful maintenance of monopoly power. The company’s “flywheel” business model, while potentially efficient, serves to entrench its dominance across multiple markets in ways that foreclose competition. Thus, Live Nation’s conduct satisfies both prongs of the Grinnell test, strongly supporting the DOJ’s case for illegal monopolization. It’s important to note, however, that while the Grinnell test remains a fundamental framework cited in monopolization cases, its application in modern antitrust law has evolved and become more nuanced. In recent decades, courts have increasingly used the Grinnell test as a starting point rather than a definitive standard. The test is now supplemented with more sophisticated economic analyses. Therefore, while the Grinnell test will likely be referenced in the Live Nation case, the court’s analysis is expected to be more comprehensive, potentially incorporating more recent precedents and economic theories to fully capture the nuances of Live Nation’s market position and conduct. The Live Nation-Ticketmaster case illuminates several fundamental limitations in current antitrust doctrine. First, the case demonstrates how the Chicago School’s permissive approach to vertical mergers, embedded in Clayton Act enforcement, systematically underestimates the long-term competitive threats posed by vertical integration in platform markets. Second, the case exposes the inherent weakness of behavioral remedies in addressing vertical merger concerns.
The failure of the 2010 settlement’s behavioral conditions—despite their specificity and ongoing oversight—suggests that such remedies are fundamentally inadequate for controlling the conduct of vertically integrated firms with substantial market power. Third, and perhaps most significantly, the case reveals the challenging burden facing regulators under Section 2 of the Sherman Act once a vertically integrated entity has established market dominance. Even with clear evidence of exclusionary conduct, proving harm under current Section 2 doctrine requires navigating complex questions about market definition and competitive effects that may not fully capture the subtle ways in which vertical integration can entrench market power. The Consumer Welfare Standard, which has dominated antitrust analysis since the 1980s, cannot fully capture the anticompetitive harm caused by Live Nation’s practices. While this standard primarily focuses on consumer prices and output, it fails to account for the multifaceted nature of competition in the live entertainment industry. Certainly, the high ticket prices and fees imposed by Live Nation are relevant concerns under this framework. However, this narrow focus obscures the broader and more insidious effects of Live Nation’s market dominance. For instance, the standard doesn’t adequately address the reduced choices faced by venues, who often feel compelled to contract with Live Nation for fear of losing access to popular acts. Similarly, it fails to capture the constraints placed on artists, who may find their touring options limited by Live Nation’s control over major venues and promotion services. The standard also struggles to account for the barriers to entry in the industry created by Live Nation’s vertically integrated structure and exclusive contracts, which stifle potential competitors and innovative business models in the ticketing and promotion markets. Moreover, the Consumer Welfare Standard’s short-term focus on prices neglects long-term impacts on innovation, diversity, and the overall health of the live entertainment ecosystem. It fails to account for how one company’s dominance can lead to less diverse music options and harm smaller venues and independent promoters who are crucial for supporting new artists. By focusing mainly on short-term price effects, the standard overlooks the broader, long-term damage to competition in the industry. This limitation of the Consumer Welfare Standard in the Live Nation case underscores the need for a more comprehensive approach to antitrust analysis, one that aligns more closely with the broader concerns of the New Brandeis movement. Building on the limitations of the Consumer Welfare Standard and the evolving application of the Grinnell test, it becomes clear that a more comprehensive approach to antitrust enforcement is necessary in the Live Nation case. The failure of the 2010 behavioral remedies further underscores this need. Despite prohibitions on retaliatory practices and requirements to license ticketing software to competitors, Live Nation has continued to dominate the market and engage in exclusionary conduct. This persistence of anticompetitive behavior, even under regulatory oversight, demonstrates that more robust, structural solutions are required.
In retrospect, it is evident that the DOJ should never have approved the merger in the first place, as the vertical integration of Live Nation and Ticketmaster created an entity with unprecedented market power and clear incentives for anticompetitive behavior. In light of these considerations, the DOJ should argue for a full structural separation of Live Nation and Ticketmaster as the primary remedy. This breakup would reintroduce genuine competition into both the concert promotion and ticketing markets, addressing the root causes of Live Nation’s market power more effectively than behavioral conditions. To ensure a competitive landscape post-separation, the court should also consider supplementary measures. These could include prohibiting exclusive deals with venues and imposing limits on the percentage of a market’s concert promotion that Live Nation can control. By advocating for these comprehensive structural changes, the DOJ can align its approach with the more aggressive, market structure-focused enforcement advocated by the New Brandeis movement. This approach not only addresses the immediate concerns in the live entertainment industry but also sets a potential precedent for future antitrust cases in similarly complex, vertically integrated industries. It recognizes that in today’s interconnected markets, protecting competition requires looking beyond short-term price effects to consider the broader ecosystem of industry participants, from artists and venues to emerging competitors and consumers. VII. Conclusion The Live Nation-Ticketmaster case serves as a stark illustration of the inadequacies of traditional antitrust enforcement in addressing the complexities of modern markets. The DOJ’s original approval of the 2010 merger, despite widespread criticism and concerns, highlights the limitations of the consumer welfare-focused approach and the ineffectiveness of behavioral remedies in curbing anticompetitive practices. The subsequent dominance of Live Nation in the live entertainment industry, characterized by its “flywheel” business model and alleged exclusionary practices, demonstrates the need for a more comprehensive and aggressive approach to antitrust enforcement. This case represents a critical juncture in the evolution of antitrust law, potentially marking a shift towards the more expansive view advocated by the New Brandeis movement. The DOJ’s pursuit of structural remedies, specifically the breakup of Live Nation and Ticketmaster, signals a recognition that protecting competition in today’s interconnected markets requires looking beyond short-term price effects to consider the broader ecosystem of industry participants. As such, the outcome of this case will have far-reaching implications for future antitrust enforcement, particularly in industries characterized by vertical integration and data-driven market power. It may set a precedent for how antitrust authorities approach complex, multi-faceted monopolies in the digital age, potentially reshaping the landscape of competition law for years to come. Ultimately, the Live Nation case underscores the urgent need for antitrust law to evolve in response to the changing nature of market power, ensuring that it remains an effective tool for promoting competition, innovation, and consumer welfare in the 21st-century economy. References Abad-Santos, Alex. “How Disappointed Taylor Swift Fans Explain Ticketmaster’s Monopoly.” Vox. Last modified November 21, 2022.
https://www.vox.com/culture/2022/11/21/23471763/taylor-swift-ticketmaster-monopoly. Abbott, Alden. “Will the Antitrust Lawsuit against Live Nation Break Its Hold on Ticketmaster?” Forbes. Last modified May 28, 2024. https://www.forbes.com/sites/aldenabbott/2024/05/28/will-the-justice-departments-monopolization-lawsuit-kill-live-nation/. Abovyan, Kristina, and Quinn Scanlan. “FTC Is ‘Just Getting Started’ as It Takes on Amazon, Meta and More, Chair Lina Khan Says.” ABC News, May 5, 2024. https://abcnews.go.com/Politics/ftc-started-takes-amazon-meta-chair-lina-khan/story?id=109928219. “Antitrust Law Basics – Section 2 of the Sherman Act.” Thomson Reuters. Last modified May 17, 2023. https://legal.thomsonreuters.com/blog/antitrust-law-basics-section-2-of-the-sherman-act/. “The Antitrust Laws.” U.S. Department of Justice. Accessed December 20, 2023. https://www.justice.gov/atr/antitrust-laws-and-you#:~:text=The%20Sherman%20Antitrust%20Act,or%20markets%2C%20are%20criminal%20violations. Aspen Skiing Co. v. Aspen Highlands Skiing Corp., 472 U.S. 585 (1985). https://supreme.justia.com/cases/federal/us/472/585/. “A Brief Overview of the ‘New Brandeis’ School of Antitrust Law.” Patterson Belknap. Last modified November 8, 2018. https://www.pbwt.com/antitrust-update-blog/a-brief-overview-of-the-new-brandeis-school-of-antitrust-law. Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., 509 U.S. 209 (1993). https://supreme.justia.com/cases/federal/us/509/209/. “Competition and Monopoly: Single-Firm Conduct under Section 2 of the Sherman Act: Chapter 1.” U.S. Department of Justice. https://www.justice.gov/archives/atr/competition-and-monopoly-single-firm-conduct-under-section-2-sherman-act-chapter-1#:~:text=Section%202%20of%20the%20Sherman%20Act%20makes%20it%20unlawful%20for,foreign%20nations%20.%20.%20.%20.%22. “Court Rejects FTC’s Bid to Block Meta’s Proposed Acquisition of VR Fitness App Developer.” Crowell. https://www.crowell.com/en/insights/client-alerts/court-rejects-ftcs-bid-to-block-metas-proposed-acquisition-of-vr-fitness-app-developer. “Federal Trade Commission and Justice Department Release 2023 Merger Guidelines.” Federal Trade Commission. Accessed December 18, 2023. https://www.ftc.gov/news-events/news/press-releases/2023/12/federal-trade-commission-justice-department-release-2023-merger-guidelines. Hovenkamp, Herbert. “Framing the Chicago School of Antitrust Analysis.” University of Pennsylvania Carey Law School 168, no. 7 (2020). https://scholarship.law.upenn.edu/cgi/viewcontent.cgi?article=3115&context=faculty_scholarship. Hovenkamp, Herbert J. “The Rule of Reason.” Penn Carey Law: Legal Scholarship Repository, 2018. https://scholarship.law.upenn.edu/cgi/viewcontent.cgi?article=2780&context=faculty_scholarship. Jones, Callum. “‘She’s Going to Prevail’: FTC Head Lina Khan Is Fighting for an Anti-monopoly America.” The Guardian, March 9, 2024. https://www.theguardian.com/us-news/2024/mar/09/lina-khan-federal-trade-commission-antitrust-monopolies. Katz, Ariel. “The Chicago School and the Forgotten Political Dimension of Antitrust Law.” The University of Chicago Law Review, 2020. https://lawreview.uchicago.edu/print-archive/chicago-school-and-forgotten-political-dimension-antitrust-law. Khan, Lina. “Amazon’s Antitrust Paradox.” The Yale Law Journal 126, no. 3 (2017). https://www.yalelawjournal.org/note/amazons-antitrust-paradox. Khan, Lina. “The Ideological Roots of America’s Market Power Problem.” The Yale Law Journal 127 (June 4, 2018).
https://www.yalelawjournal.org/forum/the-ideological-roots-of-americas-market-power-problem. Khan, Lina. “The New Brandeis Movement: America’s Antimonopoly Debate.” Journal of European Competition Law & Practice 9, no. 3 (2018): 131-32. https://doi.org/10.1093/jeclap/lpy020. Koenig, Bryan. “DOJ Has a Long Set to Play against Live Nation-Ticketmaster.” Law360. Last modified May 23, 2024. https://www.crowell.com/a/web/4TwXzF6sFW49adb3eTjznR/doj-has-a-long-set-to-play-against-live-nation-ticketmaster.pdf. Layton, Roslyn. “Live Nation’s Anticompetitive Conduct Is a Problem for Security.” ProMarket. Last modified June 25, 2024. https://www.promarket.org/2024/06/25/live-nations-anticompetitive-conduct-is-a-problem-for-security/. Levine, Jay L. “1990s to the Present: The Chicago School and Antitrust Enforcement.” Porter Wright. Last modified June 1, 2021. https://www.antitrustlawsource.com/2021/06/1990s-to-the-present-the-chicago-school-and-antitrust-enforcement/. Markham, William. “How the Consumer-Welfare Standard Transformed Classical Antitrust Law.” Law Offices of William Markham, P.C. Last modified 2021. https://www.markhamlawfirm.com/wp-content/uploads/2023/06/How-the-Consumer-Welfare-Standard-Transformed-Classical-Antitrust-Law.final_.pdf. McKenna, Francine. “What Made the Chicago School so Influential in Antitrust Policy?” Chicago Booth Review. Last modified August 7, 2023. https://www.chicagobooth.edu/review/what-made-chicago-school-so-influential-antitrust-policy. Office of Public Affairs, U.S. Department of Justice. “Justice Department Sues Live Nation-Ticketmaster for Monopolizing Markets across the Live Concert Industry.” News release. May 23, 2024. https://www.justice.gov/opa/pr/justice-department-sues-live-nation-ticketmaster-monopolizing-markets-across-live-concert. “Sherman Antitrust Act.” Britannica. Accessed August 5, 2024. https://www.britannica.com/biography/John-Sherman. “Sherman Anti-Trust Act (1890).” National Archives. https://www.archives.gov/milestone-documents/sherman-anti-trust-act. “The Ticketmaster/LiveNation Merger: What Does It Mean for Consumers and the Future of the Concert Business?” Hearings Before the Committee on the Judiciary, Subcommittee on Antitrust, Competition Policy and Consumer Rights (2009) (statement of David A. Balto). https://www.judiciary.senate.gov/imo/media/doc/balto_testimony_02_24_09.pdf. Treisman, Rachel. “Taylor Swift Says Her Team Was Assured Ticket Demands Would Be Met for Her Eras Tour.” NPR. Last modified November 18, 2022. https://www.npr.org/2022/11/17/1137465465/taylor-swift-ticketmaster-klobuchar-tennessee. United States v. Microsoft Corp., 584 U.S. ___ (2018). https://supreme.justia.com/cases/federal/us/584/17-2/. “U.S. v. Microsoft: Court’s Findings of Fact.” U.S. Department of Justice. https://www.justice.gov/atr/us-v-microsoft-courts-findings-fact. Varney, Christine A. “The TicketMaster/Live Nation Merger Review and Consent Decree in Perspective.” Speech presented at South by Southwest, March 18, 2010. U.S. Department of Justice. https://www.justice.gov/atr/speech/ticketmasterlive-nation-merger-review-and-consent-decree-perspective. Verizon Communications Inc. v. Law Offices of Curtis V. Trinko, LLP, 540 U.S. 398 (2004). https://supreme.justia.com/cases/federal/us/540/398/.

  • Burden of Innocence | brownjppe

The Burden of Innocence: Arendt’s Understanding of Totalitarianism through its Victims Elena Muglia (Author); Emerson Rhodes, Meruka Vyas (Editors) Hannah Arendt set out to describe an ideology and government that burst asunder past understandings of politics, morality, and the law. In Origins of Totalitarianism, Arendt argues that totalitarianism could not fit into previous political typologies. Instead, it navigates between definitions of political regimes like tyranny and authoritarianism, as well as distinctions historically made between lawlessness and lawfulness, arbitrary and legitimate power. Even then, Arendt holds on to the idea that totalitarianism can be described and analyzed despite escaping traditional understanding as a political ideology and system. In the preface of the first edition, Arendt expresses this hope, writing that Origins was: “Written out of the conviction that it should be possible to discover the hidden mechanics by which all traditional elements of our political and spiritual world were dissolved into a conglomeration where everything seems to have lost specific value and has become unrecognizable for human comprehension, unusable for human purpose.” One of the traditional elements of our “political and spiritual” world that she inquires into is the set of questions surrounding innocence, guilt, and responsibility. How can these concepts, which have both moral and legal implications, be applied and understood in the case of Nazi Germany, a regime void of morality and legality? Many political theorists have explored Arendt’s understanding of guilt in her report Eichmann in Jerusalem. In the report, Arendt utilizes Adolf Eichmann’s case—a Nazi Party official who helped carry out the Final Solution—to provide a concrete example of someone who is guilty but does not fit traditional understandings of what is required to be criminally guilty. Alan Norrie points out that Arendt exposes the tension between Eichmann’s lack of criminal intent, mens rea, and his criminal and evil actions (Norrie 2008, 202). The totality of totalitarianism complicates his criminal guilt, as Nazi Germany rendered every member of society complicit in its crimes. To unpack this complex nexus of guilt and responsibility, Iris Young looks at two of Arendt’s essays: “Organized Guilt and Universal Responsibility” and “Collective Responsibility” (Young 2011, 90). Young outlines how Arendt understands guilt as centered on the self, while responsibility implies a relationship with the world and membership in a political community (Young 2011, 78). Guilt arises from an objective consequence of somebody’s actions (Young 2011, 79) and is not a product of someone’s subjective state. With this understanding, everybody in Nazi Germany was responsible (irrespective of whether they took up political responsibility), but not everybody was guilty. Those who acted publicly against the Nazi Regime, like the Scholl siblings, took up political responsibility in a positive sense (Young 2011, 91). Richard Bernstein, who also discusses Eichmann, shares this understanding with Young—Eichmann is criminally guilty, but bystanders are not. Bernstein, however, elucidates that the bystanders’ responsibility is imperative to understand because their complicity was an “essential condition for carrying out the Final Solution” (Bernstein 1999, 165).
By focusing on the areas of guilt and responsibility and primarily looking at Eichmann, however, these scholars leave a theoretical gap in understanding the relationship between the victims—in the case of Nazi Germany, the stateless and Jewish people—and totalitarian ideology. These groups lack political responsibility within the totalitarian system because their innocence implies a separation from the world and a political community. In her essay “Collective Responsibility,” Arendt notes that the twentieth century has created a category of men who “cannot be held politically responsible for anything” and are “absolutely innocent.” The innocence of these victims and their apoliticality strikes at the heart of why Arendt postulates that totalitarian ideology and terror constitute a novel form of government—“[it] differs essentially from other forms of political oppression known to us such as despotism, tyranny and dictatorship.” Totalitarianism targets victims en masse, but their status as victims is not based on any action they take against the regime. While Norrie, Young, and Bernstein all acknowledge Arendt’s view that no “traditional” conception of the relationship between law and justice can be applied to totalitarianism directly, by focusing primarily on Eichmann they miss an understanding of the group of people that allowed totalitarianism to explode these notions. By tracking and parsing through Arendt’s understanding of the innocents and innocence in Origins of Totalitarianism and placing it in conversation with her understanding of action in The Human Condition, I elaborate on the unique political relationship, or lack thereof, between totalitarian ideology and the innocents. I argue that the condition of innocence of the victims represents the essence of totalitarianism’s unique form of oppression and negation of the human condition. The positioning of the innocents in a totalitarian society acts as a lens for how totalitarianism aims to reshape traditional notions of political, moral, and legal personhood. I demonstrate this by first outlining what created fertile ground in the 20th century for the condition of rightlessness of the innocents. Second, I highlight how the targeting of innocents in concentration camps lies at the heart of totalitarianism’s destruction of the juridical person—someone who is judged based on their actions. Third, I argue that by bending any notions of justice, totalitarianism destroys the moral person, a destruction that is best expressed in the innocents’ lack of internal freedom. Finally, I argue that all these components entail severing the victims from a world where they can appear and be recognized as humans. Overall, I contend that while many of the techniques unleashed on the innocents apply, to an extent, to everyone under totalitarianism, including people like Eichmann, the innocents represent the full realization of totalitarianism’s attempt to alter the essence of a political and acting person. To understand how totalitarian regimes created a mass of ‘superfluous’ people who existed outside the political realm, it is first necessary to highlight what conditions Arendt thinks sowed fertile ground for totalitarian domination and terror in the first place. A crucial condition is rooted in the failures of the nation-state in dealing with the new category of stateless people in the interwar period in Europe.
Following WWI, multiethnic empires, like the Austro-Hungarian and Ottoman empires, dissolved, which led Europe to resort to the familiar nation-state principle—presuming that each nationality should establish its own state. As Ayten Gundogdu writes, “the unquestioning application of this principle turned all those who were ‘ejected from the old trinity of state-people-territory’ into exceptions to the norm” (Gundogdu 2014, 31). These exceptions to the norm, among them Jewish people, could not be repatriated anywhere because they did not have a nation. Instead of integrating these minorities and making them fully-fledged political members, policies like Minority Treaties codified minorities as exceptions to the law. The massive scale of refugees who existed outside a political community left a set of people without any protections apart from those that states granted at their own prerogative, as acts of charity. This stateless crisis crystallized, for Arendt, the aporia of human rights—even though human rights guarantee universal rights, irrespective of any social and political category, they are enforced based on political membership. Human rights end up being the rights of citizens, leading the stateless to a condition of “absolute rightlessness.” This condition of rightlessness does not entail the loss of singular rights—just like the law temporarily deprives a criminal of the right to freedom—but a deprivation of what Arendt calls the right to have rights, defined by Arendt as a right to live “in a framework where one is judged by one’s actions and opinions.” Instead of being judged based on actions or opinions, the stateless are judged based on belonging to a group outside the nation. This innocence, an inability to be judged based on one’s deeds and words, is the defining mark of the stateless’s loss of “political status” (Arendt 1951, 386), which primes these groups of people for the particular form of oppression that totalitarianism entails. While the stateless and their condition of rightlessness were constructed even before Nazi Germany, the existence and the continuous creation of a mass of innocents lies at the core of the raison d’être of totalitarian politics. According to Arendt, totalitarianism operates based on a law of Nature and History, which has “mankind” as an end product, an “‘Aryan’ world empire” for Hitler. Mankind becomes the “embodiment” of law and justice. Jewish people, under Nazi Germany, are portrayed as the “objective enemy” halting nature’s progression, whereby every stage of terror is seen as a further step closer to achieving the ultimate human. This continuous need to follow a Darwinian law of nature leads Arendt to identify one of totalitarianism’s defining features as the law of movement: the only way totalitarian regimes can justify their existence, expansion, and domination, a justification that relies almost entirely on the group of innocents. The innocents are crucial components of the concentration camps because they are placed there alongside criminals who have committed an action. If the regime targeted only “criminals” or those who committed particular actions, the Nazi Party would have scant logic with which to fulfill its law of movement. The “innocents” are “both qualitatively and quantitatively the most essential category of the camp population,” in the sense that they exist in an “enormous” capacity and will always be present in society.
Totalitarianism relies on innocents because their existence removes any “calculable punishment for definite offenses.” Totalitarian politics aim, eventually, to turn everyone into an innocent mass that could be targeted, not because of their actions, but because of their existence. Even criminals were often sent to concentration camps only after they had completed their prison sentences, meaning they were going there not because of their criminal activity but rather arbitrarily, sacrificing a mass in favor of the laws of history and nature. The condition of rightlessness combined with total domination, exerted through the concentration camps, obliterates the juridical person for all the victims of totalitarianism. The juridical person is the foundation of modern understandings of law, constituting a person who bears and can exercise rights and who, upon violating the law, faces proportional and predictable consequences. By destroying the juridical person and turning its victims into a mass of people who exist outside any legal framework and logic, totalitarianism operates beyond any previously conceived notions of justice. As Arendt explains: “The one thing that cannot be reproduced [in a totalitarian regime] is what made the traditional conceptions of Hell tolerable to man: the Last Judgment, the idea of an absolute standard of justice combined with the infinite possibility of grace. For in the human estimation, there is no crime and no sin commensurable with the everlasting torments of Hell. Hence the discomfiture of common sense, which asks: What crime must these people have committed in order to suffer so inhumanly? Hence also the absolute innocence of the victims: no man ever deserved this. Hence finally the grotesque haphazardness with which concentration camp victims were chosen in the perfected terror state: such punishment can, with equal justice and injustice, be inflicted on anyone.” By “traditional conceptions of Hell” tolerable to man, Arendt means a Hell where every individual will be judged based on their actions and nothing else on the day of the Last Judgment. Totalitarianism shatters this idea and any existence of an “absolute standard of justice” through the concentration camps, which create Hell on earth but without any rightful last judgment. Even more importantly, because of these innocents and the arbitrariness and “haphazardness” of the way they are chosen, Arendt explains that state punishment can be “inflicted on anyone.” A tyranny targets the opponents of a regime or anyone who causes disorder, but totalitarianism cannot be understood through such a utilitarian lens. As Arendt points out in various places in Origins, without understanding totalitarianism’s “anti-utilitarian behavior,” it is difficult, if not impossible, to understand its targeting of people who commit no specific action against the regime. Concentration camps and terror materialize the law of movement like positive law materializes notions of justice in lawful governments. The guilty are innocents who stand in the way of movement. Totalitarianism does not only operate outside any traditional forms of legality and juridical personhood but also transcends any understanding of morality—the moral person is destroyed just as the juridical one is, and this is, once again, fully expressed through the treatment of innocents who become the ideal subject of totalitarianism. The ideal subject of totalitarianism lacks both internal and external freedom—which is precisely the condition imposed on the victims.
A lack of internal freedom implies an inability to distinguish right from wrong. As Arendt explains, “totalitarian terror” in the concentration camps achieves its triumph when it cuts the moral person off from “the individualist escape” and makes “the decisions of conscience questionable and equivocal.” The Nazi regime achieved this by asking the innocent to make impossible decisions that involved balancing their own lives against those of their families. This often involved a blurring of “the murderer and his victim” by involving even the concentration camp inmates in the operations of the camp. Concerning this, Robert Braun discusses Primo Levi’s account of the complicated victim, explaining that those who survived the concentration camps are always seen as suspect because of these blurred lines (Braun 1994, 186). Arendt holds a view parallel to Levi’s that focuses more on the victims’ subjective state, explaining that when they return to the “world of the living,” they are “assailed by doubts” regarding their truthfulness. The innocents represent the perfect totalitarian subject, as their doubts represent an inability to distinguish between truth and falsehood, which Arendt describes as the “standards of thought.”

What is most striking about this destabilization of conscience is that it results in a freezing effect, an inability to act. As Arendt explains, “Through the creation of conditions under which conscience ceases to be adequate and to do good becomes utterly impossible, the consciously organized complicity of all men in the crimes of totalitarian regimes is extended to the victims and thus made really total.” Regardless of what “good” entails, doing it requires committing an action for others. Doing good can be understood as analogous to how Young interprets Arendt’s understanding of political responsibility… further explaining how the victims are left in a condition of non-responsibility through their inability both to distinguish right from wrong and to act on it.

The erasure of “acting” in totalitarianism gains new meaning, or rather a more comprehensive explanation, when looking at Arendt’s discussion of acting in The Human Condition. Arendt’s work in The Human Condition illuminates the full extent of why acting becomes impossible under totalitarianism, especially for its victims. As Nica Siegel explains, an essential aspect of her understanding of action in The Human Condition is the spatialized logic that grounds action in a space where one can “reveal their unique personal identities and make their appearance in the world.” Only in this way can an action take place, as it has a “who”—a unique author—at its root, and thus has the potential to create new beginnings. With this understanding, totalitarianism is the antithesis of action for everyone to an extent, but completely for the innocent. Totalitarianism removes their space to act internally—through the destruction of conscience explained in the previous section—and externally—by removing any place to appear publicly. The innocent are removed from the rest simply by being in the concentration camps, isolated from everyone else but also from one another. This means that totalitarianism, in practice, removes any source and space for spontaneity.
Arendt defines spontaneity in Origins almost identically to how she defines action in The Human Condition, saying that spontaneity is “man’s power to begin something new out of his resources, something that cannot be explained on the basis of reactions to environment or events.” This condition of the innocent also illuminates why creating something new and making a political statement are impossible under totalitarianism. As Arendt explains, “no activity can become excellent if the world does not provide a proper space for its exercise.” As with many other tactics of totalitarianism, this lack of excellence and new beginnings is rooted in the fate of the innocents. Nobody’s actions can “become excellent” if they face the same consequences of the concentration camp as the mass of those who commit no action. This is why, under totalitarianism, “martyrdom” becomes “impossible.” Just as totalitarianism assimilates criminals to innocents in their punishment, political actors are also assimilated to this category, as they are “deprived of the protective distinction that comes of their having done something,” just as the innocents are.

What totalitarianism does to its victims is, therefore, a symptom of its wider perversion of human individuality and action in general. Even perpetrators like Eichmann lose their sense of individuality—A. J. Vetlesen has described the phenomenon as a double dehumanization between the victims and the perpetrator. Every bureaucrat in Nazi Germany was replaceable, and totalitarianism made them feel, paradoxically, “subjectively innocent,” in the sense that they do not feel responsible for their actions “because they do not really murder but execute a death sentence pronounced by some higher tribunal.” Jalusic argues that both aspects of dehumanization have in common the “loss of the human condition,” but what Jalusic misses is Vetlesen’s point that it is the persecutors who dehumanize themselves in order to avoid personal responsibility and alienate themselves from their actions—thus going against the cog-in-the-machine theory. The perpetrators retain a level of agency that is ultimately denied to the victims. The victims do not alienate themselves from their actions, as they cannot act in the first place. When Nazi officials send victims to the concentration camp, the victims lose any ability to appear and thus face a loss of the human condition, as Arendt describes in The Human Condition: “A life without speech and without action, on the other hand—and this is the only way of life that in earnest has renounced all appearance and all vanity in the biblical sense of the word—is literally dead to the world; it has ceased to be a human life because it is no longer lived among men.”

The emphasis she places on action as an essential part of living “among men” explains why, according to her, totalitarianism, unlike other forms of oppressive government, transforms “human nature itself.” While she uses the term “human nature,” she makes a strict distinction between human nature and the human condition in The Human Condition, arguing that it is impossible for us to understand human nature without resorting to God or a deity. Even in Origins, when talking about human nature, she criticizes those, like the positivists, who see it as something fixed and not constantly conditioned by ourselves. In light of her understanding of the human condition, I argue that Arendt means that totalitarianism undermines an essential part of the human condition, not human nature.
Arendt views the human condition, as opposed to human nature, as being rooted in plurality. By plurality, she means that each individual is uniquely different but also shares a means of communication with every other individual, and thus that each individual has the ability to make themselves known and engage with others. With this in mind, “human plurality is the basic condition for both action and speech,” as each individual can make a statement and be understood by others. The treatment of the victims, with their innocence as their defining factor, highlights that fellow humans can distort and condition crucial aspects of our human condition in favor of laws premised on the pretense that humans can instill justice and nature on earth. To a degree, totalitarianism subjects everyone to the conditions of “innocence” that the victims face. What distinguishes the victims from other agents under totalitarianism is that they demonstrate the ability of totalitarian ideology to instill a complete condition of innocence by placing a person entirely outside any political and legal realm and, by extension, outside of mankind. Innocence under totalitarianism is not a negative condition—in the sense of not having done anything, not taking action—but primarily a lack of positive freedom: the ability to do something and act.

Arendt’s understanding of innocence elaborates on the unique condition of superfluousness under totalitarianism. This “superfluousness” is justified through a legal and political doctrine that explodes past legal and normative frameworks by being based on movement instead of stability. The law of nature is in a constant process of Darwinian development, with the superfluous innocents as the sine qua non that keeps it going. Much of what happens to the innocents, such as the obliteration of a space to act, does happen to everyone under totalitarianism; however, the innocents bear the full expression of totalitarianism and its defiance of past notions of moral, political, and legal personhood. The innocents are not only cut off from this personhood but also from what Arendt thinks it means to be human, as they represent an inability to do what human beings do: create beginnings through spontaneous action.

The unique condition of innocence that the victims of totalitarianism face exposes totalitarianism’s own legal and political theory. The Law of Nature that Nazi Germany espouses cannot exist without the realization of a group of innocents who prove the nihilistic idea that humans can be sacrificed for a perfected mankind. As Arendt explains, the concentration camps are where the changes in “human nature are tested.” We can only understand how totalitarianism could occur by looking at this unique political erasure. The terror and fate of the innocents act as proof for everyone in the totalitarian regime that they could be next. The status of the victims also sheds light on the inexplicable deeds that Eichmann committed, as Arendt writes that one of the few, if not the only, discernible aspects of totalitarianism is that “radical evil has emerged in connection with a system in which all men have become equally superfluous.” Totalitarianism proves that it is fellow humans, themselves dehumanized albeit to a different degree, who completely sever an individual’s ties to the political and legal structures meant to protect them. This conclusion, and this elaboration of the peculiar form of oppression and domination under totalitarianism, has pressing practical and theoretical implications for modern-day politics.
As Arendt explains, totalitarianism is born from modern conditions, and so looking at how modern polities can and do create superfluousness can serve as a thermometer for a descent into totalitarianism. After all, it is important to remember that statelessness in the 20th century came before totalitarianism’s domination and terror.

References

Arendt, Hannah. “Collective Responsibility.” Amor Mundi: Explorations in the Faith and Thought of Hannah Arendt, edited by James W. Bernauer, S.J., Springer Netherlands, 1987, pp. 43–50. https://doi.org/10.1007/978-94-009-3565-5_3.
---. Eichmann in Jerusalem: A Report on the Banality of Evil. Penguin Books, 2006.
---. The Human Condition. 2nd ed., with an introduction by Margaret Canovan and a foreword by Danielle Allen, University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/H/bo29137972.html. Accessed 8 May 2024.
---. The Origins of Totalitarianism. 1951. Penguin Classics, 2017.
Benhabib, Seyla. “Judgment and the Moral Foundations of Politics in Arendt’s Thought.” Political Theory, vol. 16, no. 1, 1988, pp. 29–51. https://www.jstor.org/stable/191646.
Bernstein, Richard J. “Responsibility, Judging, and Evil.” Revue Internationale de Philosophie, vol. 53, no. 208 (2), 1999, pp. 155–72. https://www.jstor.org/stable/23955549.
Braun, Robert. “The Holocaust and Problems of Historical Representation.” History and Theory, vol. 33, no. 2, May 1994, p. 172. https://doi.org/10.2307/2505383.
Gundogdu, Ayten. Rightlessness in an Age of Rights. Oxford University Press, 2015. https://doi.org/10.1093/acprof:oso/9780199370412.001.0001.
Jalusic, Vlasta. “Organized Innocence and Exclusion: ‘Nation-States’ in the Aftermath of War and Collective Crime.” Social Research, vol. 74, no. 4, 2007, pp. 1173–200. https://www.jstor.org/stable/40972045.
Norrie, Alan. “Justice on the Slaughter-Bench: The Problem of War Guilt in Arendt and Jaspers.” New Criminal Law Review, vol. 11, no. 2, Apr. 2008, pp. 187–231. https://doi.org/10.1525/nclr.2008.11.2.187.
Siegel, Nica. “The Roots of Crisis: Interrupting Arendt’s Radical Critique.” Theoria: A Journal of Social and Political Theory, vol. 62, no. 144, 2015, pp. 60–79. https://www.jstor.org/stable/24719945.
Vetlesen, Arne Johan. Evil and Human Agency: Understanding Collective Evildoing. 1st ed., Cambridge University Press, 2005. https://doi.org/10.1017/CBO9780511610776.
Young, Iris Marion, and Martha Nussbaum. Responsibility for Justice. Oxford University Press, 2011. https://doi.org/10.1093/acprof:oso/9780195392388.001.0001.

  • Ronald Reagan and the Role of Humor in American Movement Conservatism

Ronald Reagan and the Role of Humor in American Movement Conservatism

Abie Rohrig

In this paper, I argue that analysis of Reagan’s rhetoric, and particularly his humor, illuminates many of the attitudes and tendencies of both conservative fusionism—the combination of traditionalist conservatism with libertarianism—and movement conservatism. Drawing on Ted Cohen’s writings on the conditionality of humor, I assert that Reagan’s use of humor reflected two guiding principles of movement conservatism that distinguish it from other iterations of conservatism: its accessibility and its empowering message. First, Reagan’s jokes were accessible in that they were funny even to those who disagreed with him politically; in Cohen’s terms, his jokes were hermetic (requiring a certain knowledge to be funny), not affective (requiring a certain feeling or disposition to be funny). The broad accessibility of Reagan’s humor reflected the need of movement conservatism to unify constituencies with varying political feelings and interests. Second, Reagan’s jokes were empowering—they presume, and therefore posit, the competence of their audience. Many of his jokes implied that if an average citizen were in charge of the government, they could do a far better job than status quo bureaucrats. This tone demonstrated the tendency of movement conservatism to emphasize individual freedom and self-governance as a through line of its constituent ideologies. In the first part of this paper, I offer some historical and political context for movement conservatism, emphasizing the ideological influences of Frank Meyer and William F. Buckley as well as the political influence of Barry Goldwater. I then discuss how Reagan infused many of Meyer, Buckley, and Goldwater’s talking points with a humor that is both accessible and empowering. I conclude by analyzing how Reagan’s humor was a concrete manifestation of certain principles of fusionism.

Post-war conservatives found themselves in a peculiar situation: their school of thought had varying constituencies, each with different political priorities and anxieties. George Nash writes in The Conservative Intellectual Movement Since 1945: “The Right consisted of three loosely related groups: traditionalists or new conservatives, appalled by the erosion of values and the emergence of a secular, rootless, mass society; libertarians, apprehensive about the threat of the State to private enterprise and individualism; and disillusioned ex-radicals and their allies, alarmed by international Communism” (p. 118). Conservative intellectuals like Frank Meyer and William F. Buckley attempted to synthesize these conservative schools of thought into a coherent modern Right. In 1964, Meyer published What is Conservatism?, an anthology of conservative essays that highlights the similarities between different conservative schools of thought. Buckley founded the National Review, a conservative magazine that published conservatives of all three persuasions. Its Mission Statement simultaneously decries the abandonment of “organic moral order,” affirms the indispensability of a “competitive price system,” and denounces the “satanic utopianism” of communism. 2 Both Meyer and Buckley thought that the primacy of the individual was an ideological through line of traditionalism and libertarianism. Meyer wrote in What is Conservatism?
that “the freedom of the person” should be the “decisive concern of political action and political theory.” 3 Russell Kirk, a traditionalist-leaning conservative, similarly argued that the libertarian imperative of individual freedom is compatible with the “Christian conception of the individual as flawed in mind and will” because religious virtue “cannot be legislated,” meaning that freedom and virtue can be practiced and developed together. 4 The cultivation of the maximum amount of freedom that is compatible with traditional order thus became central to fusionist thought.

Barry Goldwater, a senator from Arizona and the 1964 Republican nominee for president, championed the hybrid conservatism of Buckley and Meyer. Like Buckley’s Mission Statement, Goldwater’s acceptance speech at the Republican National Convention included a compound message in support of “a free and competitive economy,” “moral leadership” that “looks beyond material success for the inner meaning of [our] lives,” and the fight against communism as the “principal disturber of peace in the world.” 5 Goldwater also emphasized the fusionist freedom-order balance, contending that while the “single resolve” of the Republican party is freedom, “liberty lacking order” would become “the license of the mob and of the jungle.” 6

Having discussed the ideological underpinnings of conservative fusionism, I turn now to an analysis of how Reagan used humor as a tool for political framing. First, Reagan’s humor is distinctive for its accessibility: by this I mean that there are few barriers one must overcome to laugh at Reagan’s jokes. In his book Jokes: Philosophical Thoughts on Joking Matters, philosopher Ted Cohen calls jokes “conditional” if they presume that “their audiences [are] able to supply a requisite background, and exploit this background.” 7 The conditionality of a joke varies according to how much background it requires to be funny. In Cohen’s terms, Reagan’s jokes are not very conditional, since many different audiences can appreciate their content. Cohen presents another distinction that is useful for analyzing Reagan’s humor: a joke is hermetic if the audience’s “background condition involves knowledge,” and it is affective if it “depends upon feelings … likes, dislikes and preferences” of the audience. Reagan’s jokes are not very conditional because they are at most hermetic, merely requiring some background knowledge to be appreciated—not a certain feeling or disposition—and this makes his jokes funny even to people who disagree with him.

There are two ways in which Reagan’s humor is accessible. The first is that many of his jokes have apolitical premises. By apolitical, I mean that the requisite knowledge required to make a joke funny does not directly relate to government or public affairs. For instance, Reagan said at the 1988 Republican National Convention, “I can still remember my first Republican Convention. Abraham Lincoln giving a speech that sent tingles down my spine.” To appreciate this joke, one only needs to know that Reagan was, at the time, the oldest president ever to hold office. This piece of knowledge does not pertain to the government in any direct way—in fact, this joke would remain funny even if it were told by a different person at a nonpolitical conference with a reference to a nonpolitical historical figure.
Another example of Reagan’s apolitical humor is a joke he made in the summer of 1981: “I have left orders to be awakened at any time in case of national emergency, even if I’m in a cabinet meeting.” All one needs to understand here is that long meetings are often boring and sleep-inducing. One can even love long meetings and still find this joke funny, because one understands the phenomenon of a boring, sleep-inducing meeting. Reagan made hundreds of these jokes during his time in office, which were, with few exceptions, funny to just about any listener. Their apolitical content ensured that no political constituency would be unable to “get” Reagan’s jokes.

The second way in which Reagan’s humor is accessible is that his political jokes were playful and had relatively innocuous premises, meaning that one did not have to agree with their sentiment to laugh. Reagan’s political jokes can be differentiated from his apolitical jokes because they do require knowledge about government or public affairs in order to be funny. One such piece of knowledge is the inefficiency of government bureaucracy. For example, in his speech “A Time for Choosing,” Reagan says that “the nearest thing to eternal life we will ever see on this Earth is a government program.” In another speech, Reagan quips, “I have wondered at times about what the Ten Commandments would have looked like if Moses had run them through the U.S. Congress.” The premises of these jokes, though political, are not very contentious. To find them funny, one simply needs to know that bureaucracy can be inefficient, or even just that there exists a sort of joke in which bureaucracies are teased for being inefficient; one does not need to hate bureaucracy or even want to reduce bureaucracy. Cohen might offer the following analogy to explain the conditionality of Reagan’s bureaucracy jokes: one does not need to think that Polish people are actually stupid to laugh at a Polish joke; one simply needs to understand that there exists a sort of joke in which Polish people are held to be stupid. Reagan’s inoffensive political jokes are playful, lighthearted, and careful not to alienate or antagonize the opposition by presuming a controversial belief.

The accessibility of Reagan’s humor reflects the overall need for fusionism to appeal to a wide variety of conservative groups—traditionalists, libertarians, and anti-communists. Instead of converting libertarians to traditionalism or vice versa, Nash writes that fusionists looked to foster agreement on “several fundamentals” of conservative thought. Reagan’s broadly accessible humor is both a concretization of and a strategy for fusionism’s broadly accessible ideology. The strategic potency of Reagan’s humor lies in its ability to bond people together. Cohen writes that the “deep satisfaction in successful joke transactions is the sense held mutually by teller and hearer that they are joined in feeling.” Friedrich Nietzsche expresses a similar sentiment when he writes that “rejoicing in our joy, not suffering over our suffering, makes someone a friend.” This joint feeling brings people together even more than a shared belief, since the moment of connection is more visceral and immediate. One might ask, however: is it not the case that all politicians value humor as a means to connect with their audience and unify their constituencies? Why is Reagan’s humor any different? While humor can be used for a broad range of political goals, politicians often connect with one group at the expense of another.
For example, when asked what she would tell a male supporter who believed marriage was between one man and one woman, Senator Elizabeth Warren responded, “just marry one woman. I’m cool with that—assuming you can find one.” 9 Some Democrats praised this joke for its dismissal of homophobic beliefs, but others felt that the joke was condescending and antagonistic. This is the sort of divisive joke that Reagan was uninterested in—one that pleases one of his constituencies at the expense of another. Reagan would also avoid much of Donald Trump’s humor. For instance, Trump wrote in 2016, “I refuse to call Megyn Kelly a bimbo, because that would not be politically correct. Instead I will only call her a lightweight reporter!” Trump’s dismissal of “political correctness” is liberating to some but offensive to others. By contrast, Reagan’s exoteric style of humor welcomes all the constituencies of conservative fusionism. Nash writes that fusionists were “tired of factional feuding,” and thus Reagan had no motivation to drive a larger wedge between traditionalists and libertarians. 1

The second thing to note about Reagan’s humor is its empowering tone. This takes two forms. First, Reagan elevates his audience by implying that if they controlled the government, they could do a far better job, a message which presumes and therefore posits their competence. For instance, in “A Time for Choosing,” Reagan argues that one complicated anti-poverty program could be made more effective by simply sending cash directly to families. In doing so, Reagan suggests that if any given audience member were in charge of the program, they could do a better job than the bureaucrats. Second, Reagan’s insistence on limited government affirms the average citizen’s capacity for self-government. Reagan famously states that “the nine most terrifying words in the English language are, ‘I’m from the government and I’m here to help.’” Since this implies that government aid will leave you worse off, it also posits the average citizen’s capacity for autonomy, and therefore their maturity, level-headedness, and overall competence.

The empowering tone of Reagan’s humor reflects fusionism’s emphasis on individual freedom and independence. Meyer writes that “the desecration of the image of man, the attack alike upon his freedom and his transcendent dignity, provide common cause” for both traditionalists and libertarians against liberals. Yet a presupposition of a belief in freedom is a belief in people’s faculty to be free: to not squander their freedom on pointless endeavors or let their freedom collapse into chaos. This freedom-order balance is fundamental to fusionism as an ideology that balances the support of libertarians, who want as little government intervention as possible, with that of traditionalists, who want the state to maintain certain societal values. By positing the competence of the free individual in his jokes, Reagan affirms Russell Kirk’s idea that moral order will arise organically from individual freedom, not government coercion.

In this paper, I have argued that one of Reagan’s marks on the development of conservative thought was his careful use of humor to reflect certain ideological and practical commitments of post-war fusionism. By making his jokes accessible to the varying schools of conservatism and propounding the capacity of the individual for self-government, Reagan’s humor functioned as both a manifestation of and a strategy for fusionism’s post-war triumph.
References

“A Selected Quote From: The President’s News Conference, August 12, 1986.” Reagan Quotes and Speeches, Ronald Reagan Presidential Foundation & Institute. Accessed August 6, 2022. https://www.reaganfoundation.org/ronald-reagan/reagan-quotes-speeches/news-conference-1/.
Buckley Jr., William F. “Our Mission Statement.” National Review 19 (1955).
Campbell, Colin. “Donald Trump Announces to the World That He Won’t Call Megyn Kelly a ‘Bimbo.’” Insider, January 27, 2016. https://www.businessinsider.com/donald-trump-fox-news-debate-megyn-kelly-bimbo-2016-1.
Cohen, Ted. Jokes: Philosophical Thoughts on Joking Matters. Chicago: University of Chicago Press, 1999.
“‘George - Make It One More for the Gipper.’” The Independent, August 16, 1998. https://www.independent.co.uk/arts-entertainment/george-make-it-one-more-for-the-gipper-1172284.html.
“Goldwater’s 1964 Acceptance Speech.” Washington Post, last modified 1998. https://www.washingtonpost.com/wp-srv/politics/daily/may98/goldwaterspeech.htm.
Harris, Daniel I. “Friendship as Shared Joy in Nietzsche.” Symposium 19, no. 1 (2015): 199–221.
Meyer, Frank S., ed. What is Conservatism? Intercollegiate Studies Institute, 2015. Open Road Media.
Nash, George H. The Conservative Intellectual Movement in America Since 1945. Intercollegiate Studies Institute, 2014. Open Road Media.
Panetta, Grace. “Elizabeth Warren Brings Down the House at CNN LGBT Town Hall With a Fiery Answer on Same-Sex Marriage.” Insider, October 11, 2019. https://www.businessinsider.com/elizabeth-warren-brings-down-house-cnn-lgbt-town-hall-video-2019-10.
Reagan, Ronald. “A Time for Choosing.” Transcript of speech delivered in Los Angeles, CA, October 27, 1964. https://www.reaganlibrary.gov/reagans/ronald-reagan/time-choosing-speech-october-27-1964.
Sherrin, Ned, ed. Oxford Dictionary of Humorous Quotations. 4th ed. Oxford: Oxford University Press, 2008.
Wilson, John. Talking With the President: The Pragmatics of Presidential Language. Oxford: Oxford University Press, 2015.

  • Adithya V. Raajkumar

“Victorian Holocausts”: The Long-Term Consequences of Famine in British India

Adithya V. Raajkumar

Abstract: This paper seeks to examine whether famines occurring during the colonial period affect development outcomes in the present day. We compute district-level measures of economic development, social mobility, and infrastructure using cross-sectional satellite luminosity, census data, and household survey data. We then use a panel of recorded famine severity and rainfall data in colonial Indian districts to construct cross-sectional count measures of famine occurrence. Finally, we regress modern-day outcomes on the number of famines suffered by a district in the colonial era, with and without various controls. We then instrument for famine occurrence with climate data in the form of negative rainfall shocks to ensure exogeneity. We find that districts which suffered more famines during the colonial era have higher levels of economic development; however, high rates of famine occurrence are also associated with a larger percentage of the labor force working in agriculture, lower rural consumption, and higher rates of income inequality. We attempt to explain these findings by showing that famine occurrence is simultaneously related to urbanization rates and agricultural development. Overall, this suggests that the long-run effects of natural disasters which primarily afflict people and not infrastructure are not always straightforward to predict.

1. Introduction

What are the impacts of short-term natural disasters in the long run, and how do they affect economic development? Are these impacts different in the case of disasters which harm people but do not affect physical infrastructure? While there is ample theoretical and empirical literature on the impact of devastating natural disasters such as hurricanes and earthquakes, there are relatively few studies on the long-term consequences of short-term disasters such as famines. Furthermore, none of the literature focuses on society-wide development outcomes. The case of colonial India provides a well-recorded setting in which to examine such a question, with an unfortunate history of dozens of famines throughout the British Raj. Many regions were struck multiple times during this period, to the extent that historian Mike Davis characterizes them as “Victorian Holocausts” (Davis 2001, p. 9). While the short-term impacts of famines are indisputable, their long-term effects on economic development, perhaps through human development patterns, are less widely understood.

The United Kingdom formally ruled India from 1857 to 1947, following an earlier period of indirect rule by the East India Company. The high tax rate imposed on peasants in rural and agricultural India was a principal characteristic of British governance. Appointed intermediaries, such as the landowning zamindar caste in Bengal, served to collect these taxes. Land taxes imposed on farmers often ranged from half to two-thirds of their produce, but could be as high as ninety to ninety-five percent. Many of the intermediaries coerced their tenants into farming only cash crops instead of a mix of cash and food crops (Dutt 2001). Aside from high taxation, a laissez-faire attitude to drought relief was another principal characteristic of British agricultural policy in India.
Most senior officials in the imperial administration believed that serious relief efforts would cause more harm than good and, consequently, were reluctant to dispatch aid to afflicted areas (ibid.). The consequences of these two policies were some of the most severe and frequent famines in recorded history, such as the Great Indian Famine of 1893, during which an estimated 5.5 to 10.3 million peasants perished from starvation alone, and over 60 million are believed to have suffered hardship (Fieldhouse 1996).

Our paper focuses on three sets of outcomes in order to assess the long-term impact of famines. First, we examine macroeconomic measures of overall development, such as rural consumption per capita and the composition of the labor force. We also use nighttime luminosity gathered from satellite data as a proxy for GDP, whose measurement using survey data can be unreliable. Second, we look at measures of human development: inequality, social mobility, and education, constructed from the India Human Development Survey I and II. Finally, we examine infrastructure, computing effects on village-level electrification, numbers of medical centers, and bus service availability.

To examine impacts, we regress these outcomes on famine occurrence via ordinary least squares (OLS). We use an instrumental-variables (IV) approach to ensure a causal interpretation via as-good-as-random assignment (1). We first estimate famine occurrence, the endogenous independent variable, as a function of rainfall shocks—a plausibly exogenous instrument—before regressing outcomes on predicted famine occurrence via two-stage least squares (2SLS). Since the survey data are comparatively limited, we transform and aggregate panel data on rainfall and famines as counts in order to use them in a cross-section with the contemporary outcomes.

We find for many outcomes that there is indeed a marginal effect of famines in the long run, although where it is significant it is often quite small. Where famines do have a significant impact on contemporary outcomes, the results follow an interesting pattern: a higher rate of famine occurrence in a given district is associated with greater economic development yet worse rural outcomes and higher inequality. Specifically, famine occurrence has a small but positive impact on nighttime luminosity—our proxy for economic development—and smaller, negative impacts on rural consumption and the proportion of adults with a college education. At the same time, famine occurrence is also associated with a higher proportion of the labor force being employed in the agricultural sector as well as a higher level of inequality as measured by the Gini index (2). Moreover, we find limited evidence that famine occurrence has a slightly negative impact on infrastructure, as more famines are associated with reduced access to medical care and bus service. We do not find that famines have any significant impact on social mobility—specifically, intergenerational income mobility—or infrastructure such as electrification in districts. This finding contradicts much of the established literature on natural disasters, which has predominantly found large and wholly negative effects. We attempt to explain this disparity by analyzing the impact of famines on urbanization rates to show that famine occurrence may lead to a worsening urban-rural gap in long-run economic development.
Thus, we make an important contribution to the existing literature and challenge past research with one of our key findings: short-term natural disasters which do not destroy physical infrastructure may have unexpectedly positive outcomes in the long run. While the instrumental estimates are guaranteed to be free of omitted variable bias, the OLS standard errors allow for more precise judgments due to smaller confidence intervals. In around half of our specifications, the Hausman test for endogeneity fails to reject the null hypothesis of exogeneity, indicating that the ordinary least squares and instrumental variables results are equally valid (3). However, the instrumental variables estimate helps address other problems, such as attenuation bias due to possible measurement error (4).

Section 2 presents a review of the literature and builds a theoretical framework for understanding the impacts of famines on modern-day outcomes. Section 3 describes our data, variable construction, and summary statistics. Sections 4 and 5 present our results using ordinary least-squares and instrumental two-stage least-squares approaches. Section 6 discusses and attempts to explain these results.

2. Review and Theoretical Framework

2.1. The Impact of Natural Disasters

Most of the current literature on natural disasters as a whole pertains to physically destructive phenomena such as severe weather or seismic events. Moreover, most empirical studies, such as Nguyen et al. (2020) and Sharma and Kolthoff (2020), focus on short-run aspects of natural disasters relating to various facets of proximate causes (Huff 2020) or pathways of short-term recovery (Sharma and Kolthoff 2020). Famines are a unique kind of natural disaster in that they greatly affect crops, people, and animals but leave physical infrastructure and habitation relatively unaffected. We attempt to take this element of famines into account when explaining our results. Of the portion of the literature that focuses on famines, most results center on individual biological outcomes such as height, nutrition (Cheng and Hui Shui 2019), or disease (Hu et al. 2017). A portion of the remaining studies focus on long-term socioeconomic effects at the individual level (Thompson et al. 2019). The handful of papers that do analyze broad long-term socioeconomic outcomes, such as Ambrus et al. (2015) and Cole et al. (2019), all deal with either the long-term consequences of a single, especially severe natural disaster or the path-dependency effects that may occur because of the particular historical circumstances of when a disaster occurs, such as in Dell (2013). On the other hand, our analysis spans several occurrences of the same type of phenomenon in a single, relatively stable sociohistorical setting, thereby utilizing a much larger and more reliable sample of natural disasters. Thus, our paper is the first to examine the long-term effects of a very specific type of natural disaster, famine, on the overall development of an entire region, by considering multiple occurrences thereof. Prior econometric literature on India’s famine era has highlighted other areas of focus, such as Burgess and Donaldson (2012), which shows that trade openness helped mitigate the catastrophic effects of famine.
There is also plenty of historical literature on the causes and consequences of the famines, most notably in academic analyses from British historians (contemporarily, Carlyle 1900 and Ewing 1919; more recently, Fieldhouse), which tend to focus on administrative measures, or more specifically, the lack thereof. In terms of the actual effects of famine, all of the established literature asserts that natural disasters overwhelmingly influence economic growth through two main channels: destruction of infrastructure and the resulting loss of human capital (Lima and Barbosa 2019, Nguyen et al. 2020, Cole et al. 2019), or sociopolitical historical consequences, such as armed conflict (Dell 2013, Huff 2019). Famines pose an interesting question in this regard, since they tend to result in severe loss of human capital through population loss due to starvation but generally result in smaller-scale infrastructure losses (Agbor and Price 2013). This is especially the case for rural India, which suffered acute famines while having little infrastructure in place (Roy 2006). We examine three types of potential outcomes: overall economic development, social mobility, and infrastructure, as outlined in section three. Our results present a novel finding in that famine occurrence seems to positively impact certain outcomes while negatively impacting most others, which we attempt to explain by considering the impact of famines on urbanization rates.

Famines can impact outcomes through various mechanisms; therefore, we leave the exact causal mechanism unspecified and instead treat famines as generic shocks with subsequent recovery of unknown speed. If famines strike repeatedly, their initially small long-term effects on outcomes can escalate. Our intuition for distinguishing a long-run effect of famines rests on a simple growth model in which flow variables such as growth quickly return to the long-run average after a shock, but stock variables such as GDP or consumption only return to the average asymptotically (5). Thus, over finite timespans, the differences in stock variables between districts that undergo famines and those that do not should be measurable even after multiple decades. As mentioned below, this is in line with more recent macroeconomic models of natural disasters such as Hochrainer (2009) and Bakkensen and Barrage (2018).

Assume colonial districts (indexed by i) suffer n_i famines over the time period (in our data, the years 1870 to 1930), approximated as average constant rates f_i. The occurrence of famine can then be modeled by a Poisson process with interval parameter f_i, which represents the expected time between famines—even though the exact time is random and thus unknown—until it is realized (6). For simplicity, we assume that famines cause damage d to a district’s economy, for which time r_i is needed to recover to its assumed long-run, balanced growth path (7). We make no assumptions on the distributions of d and r_i except that r_i is dependent on d and that the average recovery time E[r_i] is similarly a function of E[d].
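To make the mechanics of this growth model concrete, the sketch below simulates a single district’s stock variable under Poisson famine shocks. This is a minimal illustration of the model just described, not the paper’s code, and every parameter value (shock rate, trend growth, damage, recovery speed) is invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_district(f_i, years=60, g=0.02, damage=0.05, rho=0.3):
    """Simulate the log of a stock variable (e.g. log GDP) for one district.

    f_i    -- Poisson rate: expected number of famines per year (illustrative)
    g      -- trend growth; the flow variable snaps back to g immediately
    damage -- proportional loss inflicted by each famine (illustrative)
    rho    -- share of the remaining deficit recovered each year, so the
              level returns to trend only asymptotically
    """
    trend, deficit, path = 0.0, 0.0, []
    for _ in range(years):
        trend += g                    # growth (a flow) is unaffected next period
        if rng.poisson(f_i) > 0:      # at least one famine strikes this year
            deficit += damage
        deficit *= 1.0 - rho          # geometric, never-complete recovery
        path.append(trend - deficit)
    return np.array(path)

# A famine-struck district stays below its famine-free counterfactual
# even after six decades, with the gap growing in the number of famines.
hit, calm = simulate_district(0.15), simulate_district(0.0)
print(f"log-output gap after 60 years: {calm[-1] - hit[-1]:.4f}")
```

Because the deficit decays only geometrically, cross-district differences in stock variables persist over the finite 1870–1930 window, which is exactly the measurable long-run gap the estimation below targets.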
If the district had continued on the growth path directly without the famine, absent any confounding effects, it would counterfactually have more positive outcomes today by a factor dependent on n_i·E[t_f], and thus on n_i, the number of famines suffered. We cannot observe the counterfactuals (the outcome in the affected district had it not experienced a famine), so instead we use the unaffected districts in the sample as our comparison group. Controlling for factors such as population and existing infrastructure, each district should provide a reasonably plausible counterfactual for the other districts in terms of the number of famines suffered. Then, the differences in outcomes among districts measured today, y_i, can be modeled as a function of the differences in the number of famines, n_i. Finally, across the entire set of districts, this can be used to represent the average outcome E[y_i] as a function of the number of famines, which forms the basis of our ordinary least squares approach in section four. This assumes that famine occurrence is uncorrelated with the error term, that is, with the unobserved determinants of the outcomes. To account for the possibility that this correlation is non-zero, we also use rainfall shocks to isolate the randomized part of our independent variable. The use of rainfall shocks, in turn, forms the basis of our instrumental variables approach in section five.

The important question is the nature of the relationship between d and r_i. While f can easily be inferred from our data, d and especially r are much more difficult to estimate without detailed, high-level, and accurate data. Since the historical record is insufficiently detailed to allow precise estimation of the parameters of such a model, we constrain the effects of famine to be linear in our estimation in sections four and five.

2.2. Estimation

Having constrained the hypothesized effects of famine to be linear, in section four we would prefer to estimate (1) below, where β represents our estimate of the effect of famine severity (famine_i), measured as the number of famines undergone by the district, on the outcome variable y_i, and X_i is a vector of contemporary (present-day) covariates, such as mean elevation and soil quality:

y_i = α + β·famine_i + γ′X_i + ε_i  (1)

The constant term α captures the mean outcome across all districts and ε_i is a district-specific error term. Much of the research on famine occurrence in colonial India attributes the occurrence of famines and their consequences to poor policies and administration by the British Raj. If this is the case, and these same policies hurt the development of districts in other ways, such as by stunting industrialization directly, then the estimation of (1) will not show the correct effect of famines per se on comparative economic development. Additionally, our observations of famines, which are taken indirectly from district-level colonial gazetteers and reports, may be subject to “measurement” error that is non-random. For example, the reporting of famines in such gazetteers may be more accurate in well-developed districts that received preferential treatment from British administrators. To solve this problem, we turn to the examples of Dell et al. (2012), Dell (2013), Hoyle (2010), and Donaldson and Burgess (2012), who use weather shocks as instruments for natural disaster severity.
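As a sketch of what estimating (1) looks like in practice, the following statsmodels example runs the cross-sectional OLS on simulated data. The district count matches the paper’s sample size, but all variable names, magnitudes, and coefficients are hypothetical placeholders rather than the actual data or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cross-section standing in for the 179 districts.
rng = np.random.default_rng(1)
n = 179
df = pd.DataFrame({
    "famines":      rng.poisson(4.0, n),          # famine count, 1870-1930
    "elevation":    rng.normal(300.0, 150.0, n),  # geographic control
    "soil_quality": rng.uniform(0.0, 1.0, n),     # geographic control
})
df["log_luminosity_pc"] = (0.02 * df["famines"] + 0.001 * df["elevation"]
                           + rng.normal(0.0, 0.5, n))

# Equation (1): y_i = alpha + beta*famine_i + gamma'X_i + eps_i
fit = smf.ols("log_luminosity_pc ~ famines + elevation + soil_quality",
              data=df).fit(cov_type="HC1")       # heteroskedasticity-robust SEs
print(fit.params["famines"], fit.bse["famines"])
```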
While Dell (2013) focuses on historical consequences arising from path dependency and Hoyle (2010) centers on productivity, the instrumental methodology itself is perfectly applicable to our work. Another contribution of our paper is to further the use of climate shocks as instruments; they fit the two main criteria for an instrumental variable. Primarily, weather shocks are extremely short-term phenomena, so their occurrence is unlikely to be correlated with longer-term climate factors that may impact both historical and modern outcomes. Secondly, they are reasonably random and provide exogenous variation with which we can estimate the impact of famines in an unbiased manner. We first estimate equation (2) below before estimating (1) using the predicted occurrence of famine from (2):

famine_i = π₀ + π₁·rainfall_i + δ′X_i + u_i  (2)

We calculate famine as the number of reported events occurring in our panel for a district and rainfall as the number of years in which the deviation of rainfall from the mean falls below a certain threshold, nominally the fifteenth and tenth percentiles of all rainfall deviations for that district. As in (1), there is a constant term and an error term. As is standard practice, we include the control variables in the first stage even though they are quite plausibly unrelated to the rainfall variable. This allows us to estimate the impacts of famine with a reasonably causal interpretation; since the assignment of climate shocks is ostensibly random, using them to “proxy” for famines in this manner is akin to “as good as random” estimation.

The only issue with this first-stage specification is that while we instrument counts of famine with counts of low-rainfall years, the specific years in which low rainfall occurs theoretically need not match up with years in which famine is recorded in a given district. Therefore, we would prefer to estimate (3) below instead, since it provides additional identification through a panel dataset:

famine_it = π·rainfall_it + μ_i + τ_t + u_it  (3)

Any other climate factors should be demeaned out by the time effects τ_t. Other district characteristics that may influence agricultural productivity and therefore famine severity, such as soil quality, should be differenced out with the district effects, represented by the parameters μ_i. Differences in administrative policy should be resolved with provincial fixed effects. Unfortunately, we would then be unable to implement the standard instrumental variables practice of including the control variables in both stages, since our modern-day outcomes are cross-sectional (i.e., we only have one observation per district for those measures). Nevertheless, our specification in (2) should reasonably provide randomness that is unrelated to long-term climate factors, as mentioned above. Finally, we collapse the panel by counting the number of famines that occur in the district over time in order to compare famine severity with our cross-sectional modern-day outcomes and to get an exogenous count measure of famine that we can use de novo in (1). To account for sampling variance in our modern-day estimates, we use error weights constructed from the current population of each district, meaning that our approach in section 5 is technically weighted least squares, not ordinary. While this should account for heteroscedasticity in the modern observations, we use robust MM estimators in our estimations (McKean 2004, Salibian-Barrera and Yohai 2006) to assure that our standard errors on the historical famine and rainfall variables are correct (8).
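A compact sketch of the two-stage procedure is below, using the linearmodels package on simulated data; variable names and magnitudes are again illustrative, not the paper’s. If desired, the estimator’s weights argument could carry population-based weights of the kind described above.

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical data shaped like the paper's design: counts of low-rainfall
# years instrument counts of famines.
rng = np.random.default_rng(2)
n = 179
low_rain = rng.poisson(9.0, n)                    # years below the 15th pctile
famines = rng.poisson(1.0 + 0.3 * low_rain)       # first stage, as in (2)
elevation = rng.normal(300.0, 150.0, n)
outcome = 0.02 * famines + 0.001 * elevation + rng.normal(0.0, 0.5, n)
df = pd.DataFrame({"outcome": outcome, "famines": famines,
                   "low_rain": low_rain, "elevation": elevation})

# 2SLS: the control enters both stages; famines are instrumented by the
# rainfall-shock counts, mirroring equations (1) and (2).
res = IV2SLS.from_formula(
    "outcome ~ 1 + elevation + [famines ~ low_rain]", df).fit()
print(res.params["famines"])
print(res.wu_hausman())   # endogeneity test of the kind reported in the paper
```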
The results of these approaches are detailed in section six.

3. Data

3.1. Sources and Description

Our principal data of interest is a historical panel compiled from a series of colonial district gazetteers by Srivastava (1968), which details famine severity at the district level over time in British India from 1870 to 1930. Donaldson and Burgess (2010) then code these into an ordinal scale using the following methodology:

4 – District mentioned in Srivastava’s records as “intensely affected by famine”
3 – District mentioned as “severely affected”
2 – Mentioned as “affected”
1 – Mentioned as “lightly affected”
0 – Not mentioned
9 – Specifically mentioned as being affected by spillover effects from a neighboring district (there are only four such observations, so we exclude them)

In our own coding of the data, we categorize famines as codes 2, 3, and 4, with severe famines corresponding to codes 3 and 4. We compute further cross-sectional measures, chiefly the total number and proportion of famine-years that a district experienced over the sixty-year period. This is equivalent to tabulating the frequency of code occurrences and adding the resulting totals for codes 2 to 4 to obtain a single count measure of famine. Our results are robust to using “severe” famines (codes 3 and 4) instead of codes 2, 3, and 4. Across the entire panel, codes 0 to 4 occurred with the following frequencies: 4256, 35, 207, 542, and 45, respectively.

We also supplemented this panel with panel data on rainfall over the same time period. Several thousand measuring stations across India collected daily rainfall data over the time period, which Donaldson (2012) annualizes and compares with crop data. The rainfall data in Donaldson (2012) represent the total rainfall in a given district over a year, categorized by the growing seasons of various crops (for example, the amount of total rainfall in a district that fell during the wheat growing season). Since different districts likely had different shares of crops, we average over all crops to obtain an approximation of total rainfall over the entire year. We additionally convert this into a more relevant measure in the context of famine by considering only the rainfall that fell during the growing seasons of crops typically grown for consumption in the dataset, those being bajra, barley, gram (Bengal gram), jowar (sorghum), maize, ragi (millet), rice, and wheat. Finally, to ensure additional precision over the growing season, we simply add rainfall totals during the growing seasons of the two most important food crops, rice and wheat, which make up over eighty percent of food crops in the country (World Bank, UN-FAOSTAT). The two crops have nearly opposite growing seasons, so the distribution of rainfall over the combined growing seasons serves as an approximation of total annual rainfall. Our results are robust with regard to all three definitions; the pairwise correlations between the measures are never less than ninety percent. Moreover, the cross-sectional famine instruments constructed from these are almost totally identical, as the patterns in each type of rainfall (that is, their statistical distributions over time) turn out to be the same.

As expected, there appears to be significant variation in annual rainfall. The example of the Buldana district (historically located in the Bombay presidency, now in Maharashtra state) highlights this trend, as shown in Figure 1.

Figure 1: Rainfall over time for Buldana from 1870 to 1920.
Notes: The dashed line shows mean rainfall for all food crops; the solid line shows the total rainfall over the wheat and rice growing seasons. The blue and purple lines represent the historical means for these measures of rainfall. The red shading denotes years in which famines are recorded as having affected the district.
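For concreteness, here is a toy pandas sketch of the bookkeeping just described: collapsing the coded famine panel into district-level counts (codes 2–4, severe 3–4, spillover code 9 dropped) and counting low-rainfall years under the percentile rule discussed below. All rows, values, and district-year entries are invented for illustration.

```python
import pandas as pd

# Toy district-year panel in the spirit of the Srivastava (1968) coding.
panel = pd.DataFrame({
    "district": ["Buldana"] * 4 + ["Thana"] * 4,
    "year":     [1876, 1877, 1896, 1899] * 2,
    "code":     [3, 4, 2, 0, 0, 9, 2, 3],
})
panel = panel[panel["code"] != 9]                    # exclude spillover code 9
counts = (panel.assign(famine=panel["code"].ge(2),   # codes 2-4
                       severe=panel["code"].ge(3))   # codes 3-4
                .groupby("district")[["famine", "severe"]].sum())
print(counts)

# Rainfall-shock instrument: count years whose deviation from the district
# mean falls in the bottom 15th percentile of that district's deviations.
rain = pd.DataFrame({
    "district": ["Buldana"] * 6,
    "year":     list(range(1870, 1876)),
    "rainfall": [610.0, 480.0, 700.0, 520.0, 300.0, 650.0],
})
dev = rain.groupby("district")["rainfall"].transform(lambda s: s - s.mean())
cutoff = dev.groupby(rain["district"]).transform(lambda s: s.quantile(0.15))
rain["shock"] = dev <= cutoff
print(rain.groupby("district")["shock"].sum())
```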
In general, the trends for both measures of rainfall over time are virtually indistinguishable aside from magnitude. As anticipated, famine years are marked by severe and/or sustained periods of below-average rainfall, although the correlation is not perfect. There are a few districts which have years with low rainfall and no recorded famines, but this can mostly be explained by a lack of sufficient records, especially in earlier years. On the opposite end of the spectrum, there are a few districts that recorded famines despite above-average rainfall, which could possibly be the result of non-climatic factors such as colonial taxation policies, conflicts, or other natural disasters, such as insect plagues. However, the relationship between rainfall patterns and famine occurrence suggests that we can use the former as an instrument for the latter, especially since the correlation is not perfect and famine occurrence is plausibly non-random due to the impact of British land-ownership policies.

We construct count instruments for famines by first computing the historic mean and annual deviation for rainfall in each district. We then count as famine shocks the years in which the deviation was in the bottom fifteenth percentile, in order to capture relatively severe negative rainfall shocks as plausible famine causes. For severe famines, we use the bottom decile instead. The percentiles were chosen based on famine severity so that the counts obtained using this definition were as similar as possible to the actual counts constructed from recorded famines (see above) in the panel dataset.

For modern-day outcomes, we turn to survey data from the Indian census as well as the India Human Development Survey II, which details personal variables (e.g., consumption and education), infrastructure measures (such as access to roads), and access to public goods (e.g., hospital availability) at a very high level of geographical detail. An important metric constructed from the household development surveys is that of intergenerational mobility, as measured by the expected income percentile of children whose parents belonged to a given income percentile, which we obtain from Novosad et al. (2019). Additionally, as survey data can often be unreliable, we supplement these with an analysis of satellite luminosity data, which provides measures of the nighttime luminosity of geographic cells and should serve as a more reliable proxy for economic development, following Henderson et al. (2011) and Pinkovskiy and Sala-i-Martin (2016). These data are mostly obtained from Novosad et al. (2018, 2019) and Iyer (2010), which we have aggregated to the district level. The outcome variables are as follows:

1. Log absolute magnitude per capita. We intend this to serve as a proxy for a district’s economic development in lieu of reliable GDP data. This is the logarithm of the total luminosity observed in the district divided by the district’s population. These are taken from Henderson and Storeygard (2011) by way of Novosad et al. (2018).
2. Log rural consumption per capita. This is taken from the India Human Development Survey II by way of Novosad et al.
(2019).
3. Share of the workforce employed in the cultivation sector, intended as a measure of rural development and reliance on agriculture (especially subsistence agriculture). This is taken from Iyer et al. (2010).
4. Gini index, from Iyer (2010), as a measure of inequality.
5. Intergenerational income mobility (father-son pairs), taken from Novosad et al. (2018). Specifically, we consider the expected income percentile of sons in 2012 whose fathers were located in the 25th percentile for household income (2004), using the upper bound for robustness (9).
6. The percentage of the population with a college degree, taken from census data.
7. Electrification, i.e., the percent of villages with all homes connected to the power grid (even if power is not available twenty-four hours per day).
8. Percent of villages with access to a medical center, taken from Iyer (2010), as a measure of rural development in the aspect of public goods.
9. Percent of villages with any bus service, further intended as a measurement of public goods provision and infrastructure development.

Broadly speaking, these can be classified into three categories, with 1–3 representing broad measures of economic development, 4–6 representing inequality and human capital, and 7–9 representing the development of infrastructure and the provision of public goods. As discussed in section two, our preliminary hypothesis is that the occurrence of famines has a negative effect on district development, which is consistent with most of the literature on disasters. Hence, we expect that districts suffering from more famines during the colonial period will be characterized by lower levels of development, being (1) less luminous at night, (2) poorer in terms of lower rural consumption, and (3) more agricultural, i.e., having a higher share of the labor force working in agriculture. Similarly, with regard to inequality and human capital, we expect that more famine-afflicted districts will have (4) higher inequality in terms of a higher Gini index, (5) lower upward social mobility in terms of a lower expected income percentile for sons whose fathers were at the 25th income percentile, and (6) a lower percentage of adults with a college education. Finally, by the same logic, these districts should be relatively underdeveloped in terms of infrastructure, and thus (7) lack access to power, (8) lack access to medical care, and (9) lack access to transportation services.

Finally, even though our independent variable, when instrumented, should be exogenous, we attempt to control for geographic and climatic factors affecting agriculture and rainfall in each district, namely:

- Soil type and quality (sandy, rocky or barren, etc.)
- Latitude (degrees) and mean temperature (degrees Celsius)
- Coastal location (coded as a dummy variable)
- Area in square kilometers (it should be noted that district boundaries correspond well, but not perfectly, to their colonial-era counterparts)

As mentioned previously, research by Iyer and Banerjee (2008, 2014) suggests that the type of land-tenure system implemented during British rule has had a huge impact on development in the districts (10). We also argue that it may be related to famine occurrence directly (for example, in that tenure systems favoring landlords may experience worse famines), in light of the emerging literature on agricultural land rights, development, and food security (Holden and Ghebru 2016, Maxwell and Wiebe 1998).
Specifically, we consider specifications with and without the proportion of villages in the district favoring a landlord or non-landlord tenure system, obtained from Iyer (2010). In fact, the correlation between famine counts and land tenure in our dataset is slightly above 0.23, which is not extremely high but enough to be of concern in terms of avoiding omitted variable bias. We ultimately consider four specifications for each dependent variable based on the controls in X from equation (1): no controls, land tenure, geography, and land tenure with geography. Each set of controls addresses a different source of omitted variable bias. The land-tenure control addresses the possibility of British land-tenure policies causing both famines and long-term development outcomes. The geographic controls address the possibility of factors such as mean elevation and temperature impacting crop growth while also influencing long-term development (for example, if hilly and rocky districts suffer from more famines because they are harder to grow crops in but also suffer from lower development because they are harder to build infrastructure in or access via transportation). We avoid using contemporary controls for the outcome variables (that is, including infrastructure variables, income per capita, or welfare variables on the right-hand side) because many of these could reasonably be the result of the historical effects (the impact of famines) we seek to study. As such, including them as controls would artificially dilute the impact of our independent variable.

3.2 Summary Statistics

Table I presents summary statistics of our cross-sectional dataset on the following page. One cause for potential concern is that out of the over 400 districts in colonial India, we have only managed to capture 179 in our sample. This is due chiefly to a paucity of data regarding rainfall; there are only 191 districts captured in the original rainfall data from Donaldson (2012). In addition, the changing of district names and boundaries over time makes the matching of old colonial districts with modern-day administrative subdivisions more imprecise than we would like. Nevertheless, these districts cover a reasonable portion of modern India as well as most of the regions which underwent famines during imperial rule. The small number of districts may also pose a problem in terms of the standard errors on our coefficients, as the magnitude of the impacts of famines that occurred over a hundred years ago on outcomes today is likely to be quite small.

Table 1 – Summary Statistics
Source: Author calculations, from Iyer (2010), Iyer and Banerjee (2014), Novosad et al. (2018), Asher and Novosad (2019), Donaldson and Burgess (2012).

4. Ordinary Least Squares

Although we suspect that estimates of famine occurrence and severity based on recorded historical observations may be nonrandom for several reasons (mentioned in sections two and three), we first consider direct estimation of (1) from section two. For convenience, equation (1) is reprinted below:

outcome_d = β0 + β1 · famine_d + γ′X_d + ε_d (1)

As in the previous section, famine refers to the number of years that are coded 2, 3, or 4 in famine severity as described in Srivastava (1968). X is the set of contemporary covariates, also described in section three. We estimate four separate specifications of (1) where X varies:

1. No controls, i.e., X is empty.

2. Historical land tenure, to capture any effects related to British land policy in causing both famines and long-term developmental outcomes.
3. Geographical controls relating to climatic and terrestrial factors, such as temperature, latitude, and soil quality.

4. Both (2) and (3).

Table II presents the estimates for the coefficients on famines and tenure for our nine dependent variables on the following page (we omit coefficients and confidence intervals for the geographic variables for reasons of brevity and interpretive relevance). In general, the inclusion or exclusion of controls does not greatly change the magnitudes of the estimates or their significance, except in a few cases. We discuss the effects for each dependent variable below:

Log of total absolute magnitude in the district per capita: Interestingly, the values for famine suggest that each additional famine is associated with anywhere from 1.8 to 3.6 percent more total nighttime luminosity per person in the district. As mentioned in section three, newer literature shows that nighttime luminosity is a far more reliable gauge of development than reported survey measures such as GDP, so this result is not likely due to measurement error. Thus, as the coefficient on famine is positive, it seems that having suffered more famines is positively related to development. This is in fact confirmed by the instrumental variables (IV) estimates in Table III (see section five). Curiously, the inclusion of tenure and geography controls separately does not change the significance, but including both of them together in the covariates generates far larger confidence intervals than expected and reduces the magnitude of the effect by an entire order of magnitude. This may be because each set of controls tackles a different source of omitted variable bias. As expected, however, land tenure plays a significant role in predicting a district's development; even a single percent increase in the share of villages with a tenant-favorable system is associated with a striking 73-80% additional nighttime luminosity per person.

Log rural consumption per capita: We find evidence that additional famines are associated with lower rural consumption, albeit on a minuscule scale. This suggests that the beneficial effect of famines on development may not be equal across urban and rural areas but instead concentrated in cities. For example, there might be a causal pathway that implies faster urbanization in districts that undergo more famines. Unlike with luminosity, historical land tenure does not seem to play a role in rural consumption.

Percent of the workforce employed in cultivation: As expected, additional famines seem to play a strongly significant but small role in the labor patterns of the district. Districts seem to have nearly one additional percent of the labor force working in cultivation for each additional famine, suggesting famines may inhibit the development of industries other than agriculture and cultivation. Our instrumental variables estimates confirm this. Puzzlingly, land tenure does not seem to be related to this very much at all.

Gini index: The coefficients for the number of famines are difficult to interpret, as those for the specification with no controls and the specification with both sets of controls are statistically significant with similar magnitudes yet opposite signs. The confidence interval for the latter is slightly narrower. This is probably because the true estimate is zero or extremely close to zero, and the inclusion or exclusion of controls is enough to shift the magnitude so as to flip the sign of the coefficient.
To clarify this, more data are needed; that is, more of the districts of colonial India would need to be matched in our original sample. At the very least, we can say that land tenure clearly has a large and significant positive association with inequality. Unfortunately, this association cannot be confirmed as causal due to the lack of an instrument for land tenure which covers enough districts of British India. However, as Iyer and Banerjee (2014) argue, the assignment of tenure systems itself was plausibly random (having been largely implemented on the whims of British administrators), so one could potentially interpret the results as causal with some level of caution.

Intergenerational income mobility: Similarly, we do not find evidence of an association between the number of famines suffered by a district in the colonial era and social mobility in the present day, but we do find a strong impact of land tenure, which makes sense given the reported institutional benefits of tenant-favorable systems in encouraging development as well as the obvious benefits for the tenants and their descendants themselves. Each one-percent increase in the share of villages in a district that used a tenant-favorable system in the colonial era is associated with anywhere from ten to thirteen percent higher expected income percentile for sons whose fathers were at the 25th percentile, although the estimates presented in Table II are an upper bound.

College education: We find extremely limited evidence that famines in the colonial period are associated with less human capital in the present day, with a near-zero effect of additional famines on the share of adults in a district with a college degree (indeed, the estimate rounds to zero at five to six decimal places). Land tenure similarly has very little or no effect.

Electrification, access to medical care, bus service: All three of these infrastructure and public goods variables show a negligible effect of famines, but strong impacts of historical land tenure.

Ultimately, we find that famines themselves seem to have some positive impact on long-term development despite also being associated with many negative outcomes, such as a greater share of the workforce employed in agriculture (i.e., as opposed to more developed activities such as manufacturing or services). Another finding of note is that while famines do not seem to have strong associations with all of our measures, land tenure does. This suggests that the relationship between land tenure and famine is worth looking into.

The existence of bias in the recording of famines, as well as the potential for factors that both cause famines and simultaneously affect long-term outcomes, presents a possible problem with these estimates. We have already attempted to account for one of these, namely historical land-tenure systems. Indeed, in most of the specifications, including tenure in the regression induces a decrease in the magnitude of the coefficient on famine. As the effect of famine tends to be extremely small to begin with, the relationship is not always clear. Other errors are also possible. For example, it is possible that a given district experienced a famine in a given year, but insufficient records of its occurrence remained by 1968. Then, Srivastava (1968) would have assigned that district a code of 0 for that year, when the correct code should have been higher.
Indeed, as described in section three, a code of 0 corresponds to “not mentioned,” which encompasses both “not mentioned at all” and “not mentioned as being affected by famine” (Donaldson and Burgess 2010). While measurement error in the dependent variable is usually not a problem, error in the independent variable can lead to attenuation bias in the coefficients, since the ordinary least-squares algorithm minimizes the error on the dependent variable when estimating coefficients for the independent variables. The greater this error, the more the ordinary least-squares method will bias the estimated coefficients towards zero in an attempt to minimize error in the dependent variable (Riggs et al. 1978). For these reasons, we turn to instrumental variables estimation in section five in an attempt to provide additional identification.

Table 2 – Ordinary Least-Squares Estimates
Notes: Independent variable is the number of years with recorded famines (famine code of 2 or above). Control specifications: (a) no controls, (b) land-tenure control (proportion of villages with tenant-ownership land-tenure system), (c) geographic controls (see section three for enumeration), (d) both land-tenure and geographic controls.
Source: Author calculations.
*** Significant at the 1 percent level or below (p ≤ 0.01). ** Significant at the 5 percent level (0.01 < p ≤ 0.05). * Significant at the 10 percent level (0.05 < p ≤ 0.1).

5. Weather Shocks as an Instrument for Famine Severity

As explained in section two, there are many possible reasons why recorded famine data may not be exogenous. In any case, it would be desirable to have a truly exogenous measure of famine, for which we turn to climate data in the form of rainfall shocks. Rainfall is plausibly connected to the occurrence of famines, especially in light of the colonial government's laissez-faire approach to famine relief (Bhatia 1968). For example, across all districts, rainfall averaged around 1.31 m in district-years without any famine and around 1.04 m in district-years at least somewhat affected by famine (code 1 or above). Figure 2 below shows that there is a very clear association between rainfall activity and famines in colonial India, although variability in climate data as well as famine and agricultural policy means that there are some high-rainfall districts which do experience famines as well as low-rainfall districts which do not experience as many famines, as noted in section three.

Figure 2: Associations between famine occurrence and rainfall trends

The first three scatterplots above show a negative relationship between the amount of rainfall a district receives and the general prevalence of famine and, more importantly, between the total size of a district's negative rainfall shocks and its total famine occurrences. From the final plot we see that when we classify low-rainfall years by ranking the deviations from the mean, counting the number of years in which these deviations are in the bottom fifteenth percentile corresponds well to the actual number of recorded famines for each district.
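To make this construction concrete, the following is a minimal sketch of the instrument in Python with pandas. The file and column names are hypothetical, and the sketch pools deviations across districts when taking the percentile cutoff; ranking deviations within each district would be the natural alternative reading of the definition.

```python
import pandas as pd

# Hypothetical long-format panel: one row per district-year with annual rainfall.
panel = pd.read_csv("district_rainfall.csv")  # columns: district, year, rainfall

# Deviation of each district-year's rainfall from that district's historic mean.
panel["deviation"] = (
    panel["rainfall"] - panel.groupby("district")["rainfall"].transform("mean")
)

# Flag district-years whose deviation falls in the bottom fifteenth percentile
# of all deviations; use 0.10 instead to mimic the severe-famine definition.
cutoff = panel["deviation"].quantile(0.15)
panel["low_rainfall_year"] = panel["deviation"] <= cutoff

# The instrument: the count of low-rainfall years in each district.
instrument = panel.groupby("district")["low_rainfall_year"].sum()
print(instrument.sort_values(ascending=False).head())
```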
In order to use this to measure famine exogenously, we first estimate (2) (reprinted below; see sections two and three), in which we predict the number of famines from the number of negative rainfall shocks, represented by deviations from the mean in the bottom fifteen percent of all deviations, before estimating (1) using this predicted estimate of famine in place of the recorded values:

famine_d = π0 + π1 · shocks_d + κ′X_d + v_d (2)

Our reduced form estimates, where we first run (1) using the number of negative rainfall shocks directly, are presented on the following pages in Table III (11). The reduced form equation is shown as (4) below as well:

outcome_d = δ0 + δ1 · shocks_d + λ′X_d + u_d (4)

Table 3 – Reduced Form Estimates for IV
Notes: Independent variable is the number of years in which the deviation of rainfall from the historic mean is in the bottom fifteenth percentile. Control specifications: (a) no controls, (b) land-tenure control (proportion of villages with tenant-ownership land-tenure system), (c) geographic controls (see section three for enumeration), (d) both land-tenure and geographic controls.
Source: Author calculations.
*** Significant at the 1 percent level or below (p ≤ 0.01). ** Significant at the 5 percent level (0.01 < p ≤ 0.05). * Significant at the 10 percent level (0.05 < p ≤ 0.1).

From Table III, it would appear that negative rainfall shocks have effects on the outcome variables similar to those of recorded famines in terms of the statistical significance of the coefficients on the independent variable. There is also the added benefit that we can confirm our very small and slightly negative effects of famines on the proportion of adults with a college education: for each additional year of exceptionally low rainfall in a district, the share of adults with a college education in 2011 decreases by 0.1%. In addition, whereas the coefficients in Table II were conflicting, Table III provides evidence in favor of the view that additional famines increase inequality in a district as measured by the Gini index.

However, the magnitudes of the effects of famines or low-rainfall years are predominantly larger than their counterparts in Table II, to a rather puzzling extent. While we stated earlier in section three that famines and rainfall are not perfectly correlated, it might be that variation in historical rainfall shocks can better explain variation in outcomes in the present day. To get a better understanding of the relationship between the two, it is worth looking first at the coefficients presented in Table IV, which are the results of the two-stage least-squares estimation using low-rainfall years as an instrument for recorded famines. Table IV follows the patterns established in Table II and Table III with regard to the significance of the coefficients as well as their signs; famines have a statistically significant and positive impact on nighttime luminosity, a significant negative impact on rural consumption, and a positive impact on the percent of the labor force employed in agriculture. The results with respect to Table II, concerning the impact of famine on the proportion of adults with a college education, are also very similar. Most other specifications do not show a significant effect of famine on the respective outcome, with the exception of access to medical care.
Unlike in Table II and Table III, each additional famine is associated with an additional 11.2 to 12.5 percent of villages in that district having some form of medical center or service readily accessible (according to the specifications with geographic controls, which we argue are more believable than the ones without). However, this relationship breaks down at the level of famines seen in some of our districts; a district having suffered nine or ten famines would see more than 100% of its villages having access to medical centers (which is clearly nonsensical), suggesting we may need to look for nonlinearity in the effects of famine in section six. Unfortunately, unlike in Table III, it seems that we cannot conclude much regarding the effect of famines on intergenerational mobility, as the coefficients are contradictory and generally not statistically significant. For example, the coefficient on famine in the model without any controls is highly significant and positive, but the coefficient in the model with all controls is not significant and starkly negative. The same is true for the effect of famines on the Gini index. One possibility is that the positive coefficients on famine for both of these dependent variables are driven by outliers, as our data were relatively limited due to the factors mentioned in section two.

The magnitudes of the coefficients in Table IV are generally smaller than those presented in Table III but still significantly larger than the ones in Table II. For example, in Table II, the ordinary least-squares model suggests that each additional historical famine is associated with an additional 0.5 to 0.9 percent of the district's workforce being employed in cultivation in 2011, but in Table IV, these numbers range from 1.5 to 4.3 percent for the same specifications, representing almost a tenfold increase in magnitude in some cases. One reason for this is the possibility of attenuation bias in the ordinary least-squares regression; here, there should not be any attenuation bias in our results, as the use of an instrument which we assume is uncorrelated with any measurement error in the recording of famines excludes that possibility (Durbin 1954). On the other hand, the Hausman test for endogeneity (the standard econometric test of regressor exogeneity) often fails to reject the null hypothesis that the recorded famine variable taken from Srivastava (1968) and Donaldson and Burgess (2012) is exogenous. To be precise, the test fails to reject the null hypothesis that the rainfall data add no new “information” beyond what is captured in the reported famine data.

It is possible that our rainfall instrument, as used in equation (2), is invalid due to endogeneity with the regression model specified in equation (1) despite being excluded from it. The only way to test this possibility is to conduct a Sargan-Hansen test on the model's overidentifying restrictions; however, we are unable to conduct the test, as we have a single instrument. It follows that our model is not actually overidentified (12).

Table 4 – Instrumental Variables Estimates
Notes: Independent variable is the number of years with recorded famines (famine code of 2 or above), instrumented with the number of low-rainfall years (rainfall deviation from historic mean in bottom fifteenth percentile).
Control specifications: (a) no controls, (b) land-tenure control (proportion of villages with tenant-ownership land-tenure system), (c) geographic controls (see section three for enumeration), (d) both land-tenure and geographic controls.
Source: Author calculations.
*** Significant at the 1 percent level or below (p ≤ 0.01). ** Significant at the 5 percent level (0.01 < p ≤ 0.05). * Significant at the 10 percent level (0.05 < p ≤ 0.1).

We also need to consider the viability of our instrumental variables estimates. Table V on the following page offers mixed support. While the weak-instrument test always rejects the null hypothesis of instrument weakness, for models with more controls, namely those with geographic controls, the first-stage F-values (the test statistics of interest) are relatively small. This is not encouraging, as a value of ten or more is generally recommended to be assured of instrument strength (Staiger and Stock 1997) (13). In Table IV, we show confidence intervals obtained by inverting the Anderson-Rubin test, which accounts for instrument strength in determining the statistical significance of the coefficients. These are wider in the models with more controls, although not usually wide enough to move coefficients from statistically significant to statistically insignificant.

However, additional complications arise when considering the Hausman tests for endogeneity. The p-values in Table V suggest that around half of the regression specifications in Table IV do not suffer from a lack of exogeneity, meaning that the ordinary least-squares results are just as valid for those specifications. A more serious issue is that the Hausman test rejects the null hypothesis of exogeneity for four out of nine outcome variables. Combined with the fact that the first-stage F-statistics are concerningly low for the specifications with geographic controls, this means that not only are the ordinary least-squares results likely to be biased, but the instrumental variables estimates are also likely to be imprecise. This is most concerning for the results related to rural consumption and the percent of the workforce in agriculture. Conversely, the results for nighttime luminosity are not affected, as the Hausman tests do not reject exogeneity for that outcome variable. While we might simply use the ordinary least-squares results to complement those obtained via two-stage least squares, the latter are lacking in instrument strength. More importantly, the differences in magnitude between the coefficients presented in Table II and in Table IV are too large to allow this use without abandoning consistency in the interpretation of the coefficients. Ultimately, given that the Hausman tests show that instrumentation is at least somewhat necessary, and the actual p-values for the weak-instrument test are still reasonably low (being less than 0.05 even in the worst case), we prefer to uphold the instrumental variables results, imperfect as some of them may be. We argue that it is better to have unbiased estimates from the instrumental variables procedure (IV), even if they may be less reliable, than to risk biased results due to the endogeneity problems present in ordinary least squares (OLS).
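As a rough illustration of this estimation strategy and its diagnostics, the following is a minimal sketch of the two-stage least-squares procedure in Python using the linearmodels package; the data file and variable names are hypothetical stand-ins for the district-level cross-section described above.

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical district-level cross-section; column names are illustrative.
df = pd.read_csv("districts.csv")
df["const"] = 1.0

# Land-tenure and geographic controls (specification (d) in the tables).
controls = ["tenant_share", "latitude", "temperature", "coastal", "area"]

# Recorded famine counts instrumented with counts of low-rainfall years.
model = IV2SLS(
    dependent=df["log_luminosity_pc"],
    exog=df[["const"] + controls],
    endog=df["famines"],
    instruments=df["n_rainfall_shocks"],
)
res = model.fit(cov_type="robust")

print(res.first_stage.diagnostics)  # first-stage F-statistic (instrument strength)
print(res.wu_hausman())             # Durbin-Wu-Hausman test of famine exogeneity
print(res.summary)
```

With a single instrument and a single endogenous regressor, the model is exactly identified, which is why (as noted above) a Sargan-Hansen test of overidentifying restrictions cannot be run on this specification.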
Table 5 – Instrumental Variables Diagnostics
Notes: The weak-instrument test p-value is obtained by comparing the first-stage F-statistic with the chi-square distribution with degrees of freedom corresponding to the model (number of data points minus number of estimands). Independent variable is the number of years in which the deviation of rainfall from the historic mean is in the bottom fifteenth percentile. Control specifications: (a) no controls, (b) land-tenure control (proportion of villages with tenant-ownership land-tenure system), (c) geographic controls (see section three for enumeration), (d) both land-tenure and geographic controls.
Source: Author calculations.

6. Discussion

Our data suggest that there are long-run impacts of historical famines. Tables II, IV, and VII clearly show that the number of historical famines has a statistically significant, though small, impact on the following: the average level of economic development as approximated by nighttime luminosity, the share of the population employed in cultivation, consumption, inequality, and the provision of medical services in contemporary Indian districts. There appear to be no discernible effects on intergenerational income mobility or basic infrastructure such as electrification. The effects are quite small and are generally overshadowed by other geographical factors such as climate (i.e., latitude and temperature). They are also small in comparison to the impact of other colonial-era policies such as land-tenure systems. Nevertheless, they are still interesting to observe given that the famines in question occurred nearly a hundred years prior to the measurement of the outcomes in question. We contend that they reveal lasting and significant consequences of British food policy in colonial India. Table IV suggests that a hypothetical district having suffered ten famines (which is not atypical in our data) may exhibit as much as ninety-four percent more nighttime luminosity per capita, around forty percent less consumption per capita in rural areas, 150 percent more of the workforce employed in cultivation, and a Gini index nearly ten percent greater than a district which suffered no famines. As to the question of whether or not the famines were directly caused by British policy, the results suggest that, at the very least, British nineteenth-century laissez-faire attitudes to disaster management have had long-lasting consequences for India. Moreover, these estimates are causal, as the use of rainfall shocks as instruments provides a means of estimation which is “as good as random.” Therefore, we can confidently state that these effects are truly the result of having undergone the observed famines.

In considering whether to prefer our instrumental estimates or our least-squares estimates, we must mainly weigh the problems of a potentially weak instrument against the benefits of a causal interpretation. We argue that we should still trust the IV estimates even though the instrument is not always as strong as we would like. First of all, the instrumentation of the recorded famine data with the demeaned rainfall data provides plausible causal estimation because the rainfall measures are truly as good as random. Even if the recorded famine measure is itself reasonably exogenous, as suggested by the Hausman tests, we argue that it is better to be sure.
Using instruments for a variable which is already exogenous will not introduce additional bias into the results and may even help reduce attenuation bias from any possible measurement error. The Hausman test, after all, cannot completely eliminate this possibility; it can only suggest how likely or unlikely it is. In this sense, the instrumental estimates allow us to be far more confident in our assessment of the presence or absence of the long-run impact of famines. Though the first-stage F-statistics are less than ten, they are still large enough to reject the null hypothesis of instrument weakness, as shown by the p-values for this test in Table V. We argue that it is better to be consistent than to pick and choose which set of estimates we want to accept for a given dependent variable and model. We made this choice because the differences in magnitude between the IV and OLS coefficients are too large to do otherwise.

A more interesting question raised by the reported coefficients in Table II, Table IV, and Table VII has to do with their sign. Why do districts more afflicted historically by famines seem to have more economic development yet worse outcomes in terms of rural consumption and inequality in our models? This could be due to redistributive preferences associated with or possibly even caused by famines; Gualtieri et al. (2019) pose this hypothesis in their paper on earthquakes in Italy. We note that districts suffering more famines in the colonial era are more “rural” today in that they tend to have a greater proportion of their labor force working in cultivation. This cannot be a case of mere association where more rural districts are more susceptible to famine, as our instrumental estimates in Table IV suggest otherwise. Rather, we explore the possibility that post-independence land reform in India was greater in relatively more agricultural districts. Much of the literature on land tenure suggests that redistributing land from large landowners to smaller farmers is associated with positive effects on productivity and, therefore, economic development (Iyer and Banerjee 2005, Varghese 2019). If the historical famines are causally associated with districts having less equal land tenure at independence, then this would explain their positive, though small, impact on economic development by way of inducing more land reform in those districts. On the other hand, if they are causally associated with districts remaining more agricultural in character at independence, and a district's “agriculturalness” is only indirectly associated with land reform (in that such districts benefit only because they have more agricultural land, and so gain more from the reform), this would indicate that famines have a small and positive impact on economic development through a process that is less directly causal.

Although we are unable to observe land tenure and agricultural occupations immediately at independence, we are able to supplement our data with additional state-level observations of land-reform efforts in Indian states from 1957-1992 compiled in Besley and Burgess (2000), and we aggregate the district-level observations of famines in our dataset by state (14). If our hypothesis above is correct, then we should see a positive association between the number of historical famines in a state's districts and the amount of land-reform legislation passed by that state after independence, keeping in mind that provincial and state borders were almost completely reorganized after independence.
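As a rough illustration of this supplementary exercise, the sketch below aggregates district famine counts to the state level and regresses post-independence land-reform acts on them. The file and column names are hypothetical, and the aggregation follows the averaging described in endnote 14.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical inputs; column names are illustrative.
districts = pd.read_csv("district_famines.csv")  # columns: district, state, famines
reforms = pd.read_csv("state_land_reforms.csv")  # columns: state, n_reform_acts

# Average the famine counts of the historical districts now in each state.
state_famines = districts.groupby("state")["famines"].mean()
merged = reforms.join(state_famines, on="state")

# State-level regression of land-reform acts on average famine counts.
X = sm.add_constant(merged["famines"])
fit = sm.OLS(merged["n_reform_acts"], X, missing="drop").fit()
print(fit.params)  # a negative slope would match the pattern in Figure 3
```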
Although these data are quite coarse, being at the state level, they are widely available. However, the plot below suggests completely the opposite relationship, as each additional famine across a state's districts appears to be associated with nearly 0.73 fewer land-reform acts. Even after removing the outlier of West Bengal, which underwent far more numerous land reforms due to the ascendancy of the Communist Party of India in that state, the relationship is still quite apparent; every two additional famines are associated with almost one fewer piece of land-reform legislation post-independence.

Figure 3: Historical famine occurrence vs. post-independence land reforms (second panel: West Bengal removed)

Therefore, there seems to be little evidence that famines are associated with land reforms at all. This is quite puzzling, because it is difficult to see how famine occurrence could lead to positive economic development while hurting outcomes such as inequality, consumption, and public goods provision. One potential explanation is that famines lead to higher urban development while hurting rural development, which would suggest that a key impact of famine occurrence is the worsening of an urban-rural divide in economic development. This would explain how higher famine occurrence is linked with higher nighttime luminosity, which would itself be positively associated with urbanization, but is also linked with lower rural consumption, higher inequality (which may be the result of a stronger rural-urban divide), and a higher proportion of the workforce employed in the agricultural sector. For example, it is highly plausible that famines depopulate rural areas, leaving survivors to concentrate in urban centers, where famine relief is more likely to be available. Donaldson and Burgess (2012), who find that historical famine relief tended to be more effective in areas better served by rail networks, support this explanation. At the same time, the population collapse in rural areas would leave most of the remaining workforce employed in subsistence agriculture going forward. Thus, if famines do lead to more people living in urban areas while simultaneously increasing the proportion of the remaining population employed in agriculture, then they would also exacerbate inequality and worsen rural economic outcomes. If the urbanization effect is of greater magnitude, this would also explain the slight increase in nighttime luminosity and electrification.

This is somewhat supported by the plots in Figure 4, in which urbanization is defined as the proportion of a district's population that lives in urban areas as labeled by the census. It appears that urbanization is weakly associated with famine occurrence (especially when using rainfall shocks) and positively associated with nighttime luminosity and inequality, while negatively associated with rural consumption and agricultural employment, as hypothesized above. However, the instrumental estimates of urbanization as a result of famine detailed in Table VI only weakly support the idea that famine occurrence causally impacts urbanization, as only the estimation without any controls is statistically significant.

Figure 4: Urbanization rates vs. famine occurrence and development outcomes
Notes: The first two plots (in the top row) depict urbanization against famine occurrence and negative rainfall shocks. The rest of the plots depict the various outcomes (discussed above) against the urbanization rate.

Table VI – Urbanization vs. Famine Occurrence
Notes: Dependent variable is the percent of a district's population that is urban as defined in the 2011 Indian census. Control specifications: (a) no controls, (b) land-tenure control (proportion of villages with tenant-ownership land-tenure system), (c) geographic controls (see section three for enumeration), (d) both land-tenure and geographic controls.
Source: Author calculations.
*** Significant at the 1 percent level or below (p ≤ 0.01). ** Significant at the 5 percent level (0.01 < p ≤ 0.05). * Significant at the 10 percent level (0.05 < p ≤ 0.1).

Nevertheless, this represents a far more likely explanation for our results than land reform, especially since the land-reform mechanism implies that famine occurrence would be associated with better rural outcomes. In other words, if an association between famines and land reform at independence were the real explanation behind our results, then, because the literature on land reform suggests that it is linked with improved rural development, we would not expect to see such strongly negative rural impacts of famine in our results. Therefore, not only is the explanation of differential urban versus rural development as a result of famine occurrence better supported by our data, it also constitutes a more plausible explanation for our findings. While we do not have enough data to investigate exactly how famine occurrence seems to worsen urban-rural divides in economic development (for example, through rural population collapse as hypothesized above), such a question would certainly be a key area of future study.

Conclusion

In this paper, we have shown that famines occurring in British India have a statistically significant long-run impact on present-day outcomes, using both ordinary least squares and instrumental variables estimation, instrumenting for famine with climate shocks in the form of rainfall deviations. In particular, the occurrence of famine seems to exacerbate a rural-urban divide in economic development. Famines appear to cause a small increase in overall economic development, but lower consumption and welfare in rural areas, while also worsening wealth inequality. This is supported by the finding that famines appear to lead to slightly higher rates of urbanization while simultaneously leading to a higher proportion of a district's labor force remaining employed in the agricultural sector. Even though our ordinary least-squares estimates are generally acceptable, we point to the similar instrumental variables estimates as stronger evidence of the causal impact of the famines. Ultimately, our results demonstrate that negative climate shocks combined with certain disaster management policies, such as British colonial laissez-faire approaches to famine in India, may have significant, though counter-intuitive, impacts on economic outcomes in the long run.

Endnotes

1 One can essentially understand this technique as manipulating the independent variable, which may not be randomly assigned, via a randomly assigned instrument.

2 The Gini index measures the distribution of wealth or income across individuals, with a score of zero corresponding to perfectly equal distribution and a score of one corresponding to a situation where one individual holds all of the wealth or earns all of the income in the group.

3 The Durbin-Wu-Hausman test essentially asks whether adding the instrument changes the estimated coefficients in a way that indicates bias in the model.
A rejection of the null hypothesis implies that differences in coefficients between OLS and IV are due to adding the instrument, whereas the null hypothesis assumes that the independent variable(s) are already exogenous, so that adding an instrument contributes no new information to the model.

4 Attenuation bias occurs when there is measurement error in the independent variable, which biases estimates downward due to the definition of the least-squares estimator as one which minimizes squared error on the axis of the dependent variable. See Durbin (1954) for a detailed discussion.

5 Classical growth theory, such as the Solow-Swan model (Solow 1957) and Romer (1994), implies long-run convergence and therefore that districts would have similar outcomes today regardless of the number of famines they underwent. However, this is at odds with most of the empirical literature as discussed previously, in which there are often measurable long-term effects of natural disasters.

6 A Poisson process models count data via a random variable following a Poisson distribution.

7 Although we use the term damage, the impact on the economy need not be negative; indeed, we find that some impacts of famine occurrence are positive in sections four and five, which we attempt to explain in section seven.

8 Normally, OLS assumes that the variance of the error term is not correlated with the independent variable(s), i.e., that the errors are homoscedastic. If this is not true, i.e., the errors are heteroscedastic, then the standard errors will be too small. Robust least-squares estimation calculates the OLS standard errors in a way that does not depend on the assumption that the errors are homoscedastic.

9 So, for example, if this value is 25, then there is on average no mobility, as sons would be expected to remain in the same income percentile as their fathers. Similarly, if it is less than (greater than) 25, then there would be downward (upward) mobility. A value of 50 would indicate perfect mobility, i.e., no relationship between fathers' income percentiles and those of their sons.

10 For a brief overview of the types of systems employed by the East India Company and Crown administrators, see Iyer and Banerjee (2008), or see Roy (2006) for a more detailed discussion.

11 While reduced form estimates (that is, estimating the outcomes as direct functions of the exogenous variables rather than via a structural process) are often not directly interpretable, they can serve to confirm the underlying trends in the data (for example, via the signs of the coefficients), which is why we choose to include them here.

12 The Sargan-Hansen test works very similarly to the Durbin-Wu-Hausman test, but instead uses a quadratic form on the cross-product of the residuals and instruments.

13 To be precise, this heuristic is technically only valid with the use of a single instrument, which is satisfied in our case anyway.

14 To be clear, the value of famine for each state is technically the average number of famines in the historical districts that are presently part of the state, since subnational boundaries were drastically reorganized along linguistic lines after independence.

Bibliography

Agbor, Julius A., and Gregory N. Price. 2014. “Does Famine Matter for Aggregate Adolescent Human Capital Acquisition in Sub-Saharan Africa?” African Development Review/Revue Africaine de Développement 26 (3): 454–67.

Ambrus, Attila, Erica Field, and Robert Gonzalez. 2020. “Loss in the Time of Cholera: Long-Run Impact of a Disease Epidemic on the Urban Landscape.” American Economic Review 110 (2): 475–525.
Anand, R., D. Coady, A. Mohommad, V. V. Thakoor, and J. P. Walsh. 2013. “The Fiscal and Welfare Impacts of Reforming Subsidies in India.” The International Monetary Fund, IMF Working Papers 13/128.

Anderson, T.W., and H. Rubin. 1949. “Estimation of the Parameters of a Single Equation in a Complete System of Stochastic Equations.” Annals of Mathematical Statistics 20: 46–63.

Asher, Sam, Tobias Lunt, Ryu Matsuura, and Paul Novosad. 2019. The Socioeconomic High-Resolution Rural-Urban Geographic Dataset on India.

Asher, Sam, and Paul Novosad. 2019. “Rural Roads and Local Economic Development.” American Economic Review (forthcoming). Web.

Bakkensen, Laura, and Lint Barrage. 2018. “Do Disasters Affect Growth? A Macro Model-Based Perspective on the Empirical Debate.” IMF Workshop on Macroeconomic Policy and Income Inequality.

Banerjee, Abhijit, and Lakshmi Iyer. 2005. “History, Institutions, and Economic Performance: The Legacy of Colonial Land Tenure Systems in India.” American Economic Review 95 (4): 1190–1213.

Besley, Timothy, and Robin Burgess. 2000. “Land Reform, Poverty Reduction and Growth: Evidence from India.” Quarterly Journal of Economics 115 (2): 389–430.

Bhatia, B.M. 1968. Famines in India: A Study in Some Aspects of the Economic History of India (1860–1965). London: Asia Publishing House. Print.

Bose, Sugata, and Ayesha Jalal. 2004. Modern South Asia: History, Culture, Political Economy (2nd ed.). Routledge.

Brekke, Thomas. 2015. “Entrepreneurship and Path Dependency in Regional Development.” Entrepreneurship and Regional Development 27 (3–4): 202–18.

Burgess, Robin, and Dave Donaldson. 2010. “Can Openness Mitigate the Effects of Weather Shocks? Evidence from India's Famine Era.” American Economic Review 100 (2), Papers and Proceedings of the 122nd Annual Meeting of the American Economic Association: 449–453.

Carlyle, R. W. 1900. “Famine Administration in a Bengal District in 1896-7.” Economic Journal 10: 420–30.

Cheng, Wenli, and Hui Shi. 2019. “Surviving the Famine Unscathed? An Analysis of the Long-Term Health Effects of the Great Chinese Famine.” Southern Economic Journal 86 (2): 746–72.

Cohn, Bernard S. 1960. “The Initial British Impact on India: A Case Study of the Benares Region.” The Journal of Asian Studies 19 (4): 418–431.

Cole, Matthew A., Robert J. R. Elliott, Toshihiro Okubo, and Eric Strobl. 2019. “Natural Disasters and Spatial Heterogeneity in Damages: The Birth, Life and Death of Manufacturing Plants.” Journal of Economic Geography 19 (2): 373–408.

Davis, Mike. 2001. Late Victorian Holocausts: El Niño Famines and the Making of the Third World. London: Verso. Print.

Dell, Melissa, Benjamin F. Jones, and Benjamin A. Olken. 2012. “Temperature Shocks and Economic Growth: Evidence from the Last Half Century.” American Economic Journal: Macroeconomics 4 (3): 66–95.

Dell, Melissa. 2013. “Path Dependence in Development: Evidence from the Mexican Revolution.” Harvard University Economics Department, Manuscript.

Donaldson, Dave. 2018. “Railroads of the Raj: Estimating the Impact of Transportation Infrastructure.” American Economic Review 108 (4-5): 899–934.

Drèze, Jean. 1991. “Famine Prevention in India.” In Drèze, Jean, and Amartya Sen (eds.), The Political Economy of Hunger: Famine Prevention. Oxford University Press US, pp. 32–33.

Durbin, James. 1954. “Errors in Variables.” Revue de l'Institut International de Statistique / Review of the International Statistical Institute 22 (1): 23–32.

Dutt, R. C. 1902, 1904, 2001. The Economic History of India Under Early British Rule: From the Rise of the British Power in 1757 to the Accession of Queen Victoria in 1837. London: Routledge.
Ewbank, R. B. 1919. “The Co-Operative Movement and the Present Famine in the Bombay Presidency.” Indian Journal of Economics 2 (November): 477–88.

FAOSTAT. 2018. FAOSTAT Data. Faostat.fao.org, Food and Agriculture Organization of the United Nations.

Fieldhouse, David. 1996. “For Richer, for Poorer?” In Marshall, P. J. (ed.), The Cambridge Illustrated History of the British Empire. Cambridge: Cambridge University Press, pp. 108–146.

Goldberger, Arthur S. 1964. “Classical Linear Regression.” Econometric Theory. New York: John Wiley & Sons, pp. 164–194.

Gooch, Elizabeth. 2017. “Estimating the Long-Term Impact of the Great Chinese Famine (1959-61) on Modern China.” World Development 89 (January): 140–51.

Gualtieri, Giovanni, Marcella Nicolini, and Fabio Sabatini. 2019. “Repeated Shocks and Preferences for Redistribution.” Journal of Economic Behavior and Organization 167 (11): 53–71.

Henderson, J. Vernon, Adam Storeygard, and David Weil. 2011. “A Bright Idea for Measuring Economic Growth.” American Economic Review.

Hochrainer, S. 2009. “Assessing the Macroeconomic Impacts of Natural Disasters: Are There Any?” World Bank Policy Research Working Paper 4968. Washington, DC, United States: The World Bank.

Holden, Stein T., and Hosaena Ghebru. 2016. “Land Tenure Reforms, Tenure Security and Food Security in Poor Agrarian Economies: Causal Linkages and Research Gaps.” Global Food Security 10: 21–28.

Hoyle, R. W. 2010. “Famine as Agricultural Catastrophe: The Crisis of 1622-4 in East Lancashire.” Economic History Review 63 (4): 974–1002.

Hu, Xue Feng, Gordon G. Liu, and Maoyong Fan. 2017. “Long-Term Effects of Famine on Chronic Diseases: Evidence from China's Great Leap Forward Famine.” Health Economics 26 (7): 922–36.

Huff, Gregg. 2019. “Causes and Consequences of the Great Vietnam Famine, 1944-5.” Economic History Review 72 (1): 286–316.

Li, Q., and J.S. Racine. 2004. “Cross-Validated Local Linear Nonparametric Regression.” Statistica Sinica 14: 485–512.

Lima, Ricardo Carvalho de Andrade, and Antonio Vinicius Barros Barbosa. 2019. “Natural Disasters, Economic Growth and Spatial Spillovers: Evidence from a Flash Flood in Brazil.” Papers in Regional Science 98 (2): 905–24.

Maxwell, Daniel, and Keith Daniel Wiebe. 1998. Land Tenure and Food Security: A Review of Concepts, Evidence, and Methods. Land Tenure Center, University of Wisconsin-Madison.

McKean, Joseph W. 2004. “Robust Analysis of Linear Models.” Statistical Science 19 (4): 562–570.

Nguyen, Linh, and John O. S. Wilson. 2020. “How Does Credit Supply React to a Natural Disaster? Evidence from the Indian Ocean Tsunami.” European Journal of Finance 26 (7–8): 802–19.

Pinkovsky, Maxim L., and Xavier Sala-i-Martin. 2016. “Lights, Camera, ... Income! Illuminating the National Accounts-Household Surveys Debate.” Quarterly Journal of Economics 131 (2): 579–631.

Riggs, D. S., J. A. Guarnieri, et al. 1978. “Fitting Straight Lines When Both Variables Are Subject to Error.” Life Sciences 22: 1305–60.

Romer, P. M. 1994. “The Origins of Endogenous Growth.” The Journal of Economic Perspectives 8 (1): 3–22.

Roy, Tirthankar. 2006. The Economic History of India, 1857–1947. Oxford University Press India. Print.
Ruppert, David, M.P. Wand, and R.J. Carroll. 2003. Semiparametric Regression. Cambridge University Press. Print.

Salibian-Barrera, M., and V.J. Yohai. 2006. “A Fast Algorithm for S-Regression Estimates.” Journal of Computational and Graphical Statistics 15 (2): 414–427.

Scholberg, Henry. 1970. The District Gazetteers of British India: A Bibliography. University of California, Bibliotheca Asiatica 3 (4).

Sharma, Ghanshyam, and Kurt W. Rotthoff. 2020. “The Impact of Unexpected Natural Disasters on Insurance Markets.” Applied Economics Letters 27 (6): 494–97.

Solow, Robert M. 1957. “Technical Change and the Aggregate Production Function.” Review of Economics and Statistics 39 (3): 312–320.

Srivastava, H.C. 1968. The History of Indian Famines from 1858–1918. Agra: Sri Ram Mehra and Co. Print.

Staiger, Douglas, and James H. Stock. 1997. “Instrumental Variables Regression with Weak Instruments.” Econometrica 65 (3): 557–586.

Thompson, Kristina, Maarten Lindeboom, and France Portrait. 2019. “Adult Body Height as a Mediator between Early-Life Conditions and Socio-Economic Status: The Case of the Dutch Potato Famine, 1846-1847.” Economics and Human Biology 34 (August): 103–14.

Varghese, Ajay. 2019. “Colonialism, Landlords, and Public Goods Provision in India: A Controlled Comparative Analysis.” The Journal of Development Studies 55 (7): 1345–1363.

Wang, Chunhua. 2019. “Did Natural Disasters Affect Population Density Growth in US Counties?” Annals of Regional Science 62 (1): 21–46.

World Bank. 2011. “India Country Overview.” Worldbank.org.

  • The European Union Trust Fund for Africa: Understanding the EU’s Securitization of Development Aid and its Implications

The European Union Trust Fund for Africa: Understanding the EU’s Securitization of Development Aid and its Implications

Migena Satyal, Author
Jason Fu and Sophie Rukin, Editors

Abstract

Migration policies in the European Union (EU) have long been securitized; however, the 2015 migration crisis represented a turning point for the EU’s securitization of development aid to shape migration outcomes from various African countries. In 2015, the European Union Emergency Trust Fund for Stability and Addressing Root Causes of Irregular Migration and Displaced Persons in Africa (EUTF) was created at the Valletta Summit on Migration to address the drivers of irregular migration, such as poverty, poor social and economic conditions, weak governance and conflict prevention, and inadequate resiliency to food and environmental pressures. The duration of the fund was from 2016 to 2021. Central to the strategy of the EUTF was addressing “root causes”; however, the fund came with security dimensions. Under its objective of improved migration management, the EU directed capital to various security apparatuses in Africa to limit the movement of irregular migrants and prevent them from reaching Europe. This approach diverted aid from addressing the existing problems faced by vulnerable populations in the region and contributed to practices and organizations responsible for implementing coercive measures to limit the movement of migrants and for committing human rights abuses. This paper examines the political and ideological motives and objectives behind the EU’s securitization of development financing via the EUTF, how the EU has strategically used the “root causes” narrative to secure these arrangements, and the ways in which this pattern of interaction is inherently neo-colonial.

Introduction: The European Union Trust Fund for Africa (EUTF)

The European Union Emergency Trust Fund for Stability and Addressing Root Causes of Irregular Migration and Displaced Persons in Africa (EUTF for Africa) was established in November 2015 at the Valletta Summit on Migration, where European and African heads of state met to address the challenges and opportunities presented by the 2015 migration crisis. African and European heads of state recognized that migration was a shared responsibility between the countries of origin, transit, and destination. They were joined by the African Union Commission, the Economic Community of West African States, states party to the Khartoum and Rabat Processes, the Secretary-General of the United Nations, and representatives of the International Organization for Migration. The Valletta Summit identified the root causes of irregular migration and forced displacement, which became the guiding narrative used to create and implement the EUTF. The Action Plan of the Summit stated, “the Trust Fund will help address the root causes of destabilization, forced displacement, and irregular migration by promoting economic and equal opportunities, strengthening the resilience of vulnerable people, security, and development.” The premise was that addressing these issues via development aid would limit irregular migration. The European Commission claimed that “demographic pressure, environmental stress, extreme poverty, internal tensions, institutional weaknesses, weak social and economic infrastructures, and insufficient resilience to food crises, as well as internal armed conflicts, terrorist threats, and a deteriorated security environment” needed to be addressed within the EUTF framework.
However, the root causes narrative itself was partially based on assumption rather than empirical evidence. Economic data analyzing the relationship between economic development aid and migration show the opposite of what the narrative assumes: economic and human development increase people’s ambitions, competencies, and resources, encouraging them to emigrate. Migration trends downward only once a country reaches an upper-middle income level, a pattern also known as the migration hump. Although EU officials were aware of this phenomenon, they ignored the underlying issues of the root causes narrative and proceeded to create the fund.

Between 2016 and 2022, the EUTF disbursed approximately EUR 5.0 billion across 26 African countries in the Sahel and Lake Chad, North Africa, and the Horn of Africa. This funding was on top of the pre-existing EUR 20 billion in annual aid from the EU to these geographical regions. Although the EUTF was packaged as development aid, with the money drawn almost exclusively from the European Development Fund (EDF), which specifically targets economic, social, and cultural development programs, it fell within the 2015 European Agenda on Migration, introducing a security dimension to development financing. The EU and African partner countries used a significant amount of aid from the EUTF to bolster migration management initiatives via the funding and strengthening of security apparatuses responsible for targeting migrants within Africa, before they could embark on their journeys to European states. Under the EUTF, improved migration management constitutes “contributing to the development of national and regional strategies on migration management, containing and preventing irregular migration, and fight against trafficking of human beings, smuggling of migrants and other related crimes, effective return and readmission, international protection and asylum, legal migration, and mobility.” It includes providing capital to train border agents, bolstering surveillance infrastructure to monitor citizens’ movement, and expanding logistical capacities. In some cases, it also relies on encouraging certain policies in recipient countries to align with the priorities of the donor countries. As shown in the EUTF annual reports (Figures 1-1.6), there was an increasing diversion of capital towards funding migration management projects in Africa, which came at the expense of economic development projects. By using aid to fund security goals, the EU securitized and politicized development financing. Securitization in migration policy refers to the externalization and extra-territorialization of migration control through border controls and the reclassification of various activities, like drug trafficking, illegal immigration, and the delinquency of migrants, as national security concerns.

Still, some EUTF funding went towards projects geared at economic development. As stated in the Action Plan and shown in subsequent annual reports, the EUTF implemented programs that promoted job creation, education, entrepreneurship, and building resiliency. However, the EU and its partners also used money from the development package to strengthen migration management initiatives and shift responsibilities to third countries in Africa, ultimately creating “legal black holes” where European norms about human rights did not apply.
Despite the clear evidence of the EU’s contribution to abuses towards African irregular migrants, the EU continues to implement repressive policies through various externalization mechanisms and faulty narratives that have been empirically shown not to work (such as the root causes narrative) in order to further its own interests on the African continent.

Research Question

The practice of funneling capital toward security-related migration management projects raises the following question: Why has the EU opted to securitize its development aid through the EUTF in the aftermath of the 2015 migration crisis? Furthermore, what are the implications of aid securitization in terms of development aid effectiveness, human rights practices, and the EU’s external legitimacy as a normative actor? Answering these overarching questions and understanding the promotion and proliferation of migration policies through pacts like the EUTF requires an inward look into the European Union and its political and ideological interests in the migration policy domain. Therefore, I propose that the EUTF was a neo-colonial mechanism through which European member states could further their migration policy priorities in certain African states, thereby reinforcing their colonial-legacy hierarchies.

Methodology

First, I will provide background information about the EUTF, highlighting its objectives and strategies for development aid implementation and effectiveness. Then, I will provide quantitative data regarding the dispersion of money from the EUTF to show the increasing investments toward migration management schemes. Understanding these specificities and the inherent challenges of the EUTF will contextualize my hypothesis. Next, I will support my hypothesis through case studies of specific EUTF security operations in African countries, analysis of the EU’s previous migration policies, interviews with African and European Union stakeholders about the EUTF’s development and impact, and various theories that help explain how the EU navigates its migration policies. Finally, I will assess the implications of aid securitization in both Europe and Africa.

My research will rely on official documents from the EU about its migration agenda and policies. It will also use data from academic journals and previous literature that have examined the trajectory of the EU’s migration-development nexus, specifically through the EUTF. Assessing the current nature of the EU’s migration policies will be useful in helping guide future policies. As migration becomes an increasingly salient issue, it is crucial to determine strategies or “best practices” that are humane and sustainable. Adhering to human rights norms should be at the center of these policies.

Background

The Action Plan of the Valletta Summit was based on five priority domains:

1. Reducing poverty, advancing socio-economic development, and promoting peace and good governance.

2. Facilitating educational and skills-training exchanges between African and EU member states, as well as the creation of legal pathways of employment for migrants and returnees.

3. Providing humanitarian assistance to countries needing food assistance, shelter, water, and sanitation.

4. Fighting against irregular migration, migrant smuggling, and trafficking.

5. Facilitating the return, readmission, and reintegration of migrants.
During Valletta, Martin Schulz, then President of the European Parliament, stated, "By boosting local economies through trade, for example through economic partnership agreements and through 'aid for trade' programs, by investing in development and by enhancing good governance people will be enabled to stay where they want to be 'at home.'" He reiterated that the purpose of the EUTF is not to "fight the migrants" but rather to "fight the root causes of migration: poverty and conflict." This seemingly proactive approach underscores the belief that addressing the primary drivers of migration through development measures will empower people to remain in their respective countries by choice rather than feeling compelled to migrate elsewhere.

"Root Causes": Overlooking Evidence

The problem with the EU's understanding and use of the "root causes" narrative is that it ignores how wage differentials contribute to migratory patterns. Wage differentials are discrepancies in pay for similar jobs that arise from factors like industry or geography. While development aid can be effective, it is not enough to redistribute wealth and address the deep structural inequalities of the global economy that drive migration toward more developed and wealthier countries. Subsequent sections will elaborate further on the adoption of the root causes framing.

EUTF Annual Aid Reports (2016-2022)

As stated in the Valletta Summit political declaration, the EU was committed to "address the root causes of irregular migration" through the EUTF. However, aid allocation data from the EUTF annual reports (Figures 1-1.6), which break down the distribution of aid in amount and percentage terms by geographical window and by five of the EUTF's objectives, show an increasing prioritization of migration management schemes at the expense of development projects between 2016 and 2022. In 2016 (Figure 1), when the EUTF was in its implementation phase, EU officials distributed significantly more funds to economic development projects across North Africa, the Sahel, and the Horn of Africa than to any other domain, in line with the root causes narrative emphasized at Valletta. In 2017 (Figure 1.1), the allocation for improved migration management increased significantly across the three regions. In North Africa, funding for economic development, strengthening resilience, and conflict prevention was eliminated, while EUR 285 million was given to migration management. This pattern is strategic given the region's geographic proximity to southern European borders. In 2018 (Figure 1.2), North Africa remained the biggest recipient of migration management funds but again received no funding for development projects. In 2019 (Figure 1.3), 31.56 percent of total funding was invested in migration management. In 2020 (Figure 1.4), 2021 (Figure 1.5), and 2022 (Figure 1.6), improved migration management projects continued to receive the most funding at the expense of the other objectives. The funding patterns outlined in these reports show the EU's increasing focus on its migration objectives.
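The percentage figures cited from the annual reports are simple shares of each year's approved funding by objective. As a minimal illustration of how the breakdowns in Figures 1-1.6 are read (for instance, the 31.56 percent figure for 2019), the short Python sketch below computes per-objective shares for a single hypothetical year. The euro amounts and the "Cross-cutting" line are invented placeholders, not values from the reports, and the objective labels only approximate the reports' categories.

```python
# Illustrative only: compute funding shares by EUTF objective for one year.
# All euro amounts below are hypothetical placeholders, NOT figures taken
# from the EUTF annual reports.

allocations_meur = {
    "Greater economic and employment opportunities": 320.0,
    "Strengthening resilience": 180.0,
    "Improved migration management": 410.0,
    "Improved governance and conflict prevention": 240.0,
    "Cross-cutting": 50.0,  # hypothetical catch-all line
}

total_meur = sum(allocations_meur.values())

# Report each objective's share of the year's total approved funding,
# largest first, mirroring how the annual reports present the breakdown.
for objective, amount in sorted(
    allocations_meur.items(), key=lambda kv: kv[1], reverse=True
):
    share = 100.0 * amount / total_meur
    print(f"{objective}: EUR {amount:,.0f}M ({share:.2f}% of total)")

# With these placeholder numbers, "Improved migration management"
# prints as EUR 410M (34.17% of total).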
Figure 1: EUTF Projects Approved in 2016
Figure 1.1: EUTF Projects Approved in 2017
Figure 1.2: EUTF Projects Approved in 2018
Figure 1.3: EUTF Projects Approved in 2019
Figure 1.4: EUTF Projects Approved in 2020
Figure 1.5: EUTF Projects Approved in 2021
Figure 1.6: EUTF Projects Approved in 2022

Taking this background information and data into account, I will support my hypothesis in the following sections, explaining why the EU increasingly invested in migration management projects.

Defining Neo-Colonialism

The concept of 'neo-colonialism' was coined by Kwame Nkrumah in Neo-Colonialism: The Last Stage of Imperialism, in which he argues that neo-colonialism is a contemporary form of colonialism perpetuated through less traditionally coercive methods, such as development aid. This theory can be applied when assessing the relations and interdependency between former colonial states and formerly colonized states. Interdependence is manufactured by former colonial powers that "[give] independence" to their subjects, only to follow up by allocating aid. They speak about guaranteeing independence and liberation but never implement policies to preserve them, in an effort to maintain their influence and objectives through unobtrusive and monetary means rather than directly coercive ones. As a result, these countries' economic systems, and thus their political policies, are "directed from outside" through foreign capital.

EUTF as a Neo-Colonial Instrument

In the 19th and 20th centuries, European powers reshaped all aspects of African society through colonialism for their own strategic imperatives. These included, but were not limited to, the extraction of material resources, manufactured dependency, and the assertion of European institutions and policies at the expense of indigenous cultures and institutions. The complete overhaul of pre-colonial Africa interrupted economic and political development in the region and led to its continued structural subordination despite the achievement of independence from European colonial states in the 20th century. As a result, the repercussions of colonialism have contemporary implications for EU-Africa relations. During the colonial era, colonial powers used military power and other coercive strategies to assert foreign influence; today, former colonial powers capitalize on the weaknesses of African countries and use political and economic measures to gain influence. Colonialism never disappeared; rather, it evolved into neo-colonialism. This concept is demonstrated in the framework of the EUTF which, despite being a development aid package and the product of a seemingly coordinated multilateral process, imposed conditionalities and security measures on African states to achieve political goals in the field of migration. Under the EUTF, patterns of cooperation between European countries and their former colonies to limit migration are also prevalent, especially in the cases of Libya and Niger. These initiatives safeguard colonial-era power structures and undermine the sovereignty of the respective African states.
The EU took advantage of its status as a donor institution through three mechanisms that enforced hierarchies between African and European powers:

1. The governance structure, designed to limit African stakeholder engagement;
2. The EU's imposition of positive and negative conditionalities on certain African states;
3. The strategic partnerships between European and African states to implement migration management programs.

These mechanisms demonstrate the EU's broader goal of asserting its influence over the region's migration policies by implementing security schemes, jeopardizing the needs of African states and the preservation of human rights in the process. The use of the EUTF to conduct such projects signals a "de facto policy purchase" of African governments' stances on migration. Consequently, African states become an "instrument" for European neo-colonial policies, especially in the migration domain.

Eurafrica to Modern EU-Africa Relations

The legacy and discourse of colonialism and neo-colonialism are not shared equally among EU member states. Most European countries were colonial powers, with the exceptions of Ireland and Malta and of several central European countries that were themselves subjugated to the authority of larger imperial powers. However, specific past actions hold little significance when discussing the broader nexus between European integration, the European Union, and colonialism. In Eurafrica: The Untold History of European Integration and Colonialism, Peo Hansen and Stefan Jonsson argue that there was a vast overlap between the colonial and European projects. Several African countries under colonialism played a key role in efforts toward European integration and unity from the 1920s to the 1950s under the geo-political concept of Eurafrica. According to this idea, European integration would only occur through the "coordinated exploitation of Africa and Africa could be efficiently exploited only if European states cooperated and combined their economic and political capacities." The pan-European movement in the interwar period based its conditions for peace on a "united colonial effort" in Africa. Eurafrica turned into a political reality with the emergence of the European Economic Community (EEC), made up of Belgium, France, Italy, Luxembourg, the Netherlands, and West Germany, along with colonial possessions that were referred to as "overseas countries and territories" (OCTs). For the EEC, Africa served as a "necessity," "a strategic interest," "an economic imperative," "a peace project," "a white man's burden," and "Europe's last chance." Put differently, "Africa was indispensable for Europe's geopolitical and economic survival." Africa became the guiding force of European integration, and Eurafrica became a system through which colonial powers could preserve their empires. Eurafrica, in its original form, did not materialize because African countries took back control from European colonial powers, but its legacy is crucial to the development of the EEC and modern EU-Africa relations. Today, the EU describes its relationship with Africa in terms like "interdependence" and "partnership of equals." Nonetheless, the EU's colonial past still plays a significant role in its foreign policy toward Africa as it promotes the adoption of European rules and practices in its "normative empires." The continuation of these empires has cemented core-periphery dynamics of interaction, which ultimately advance European interests, especially in the migration domain.
Specifically, the EU's externalization of border and migration management efforts to transfer the European model of governance to third countries has transformed those countries into "southern buffer zones" meant to curtail unwanted migration and enhance Europe's sense of security. Such measures demonstrate the separation of physical borders from functional regimes in Europe's fluid borderlands, echoing imperial practices in which control was extended beyond territorial boundaries. These practices are evident in the EU's security operations through pacts like the EUTF, the EU-Turkey Deal, and Operation SOPHIA. These externalization policies ensure the continuity of the vision derived from the Eurafrica project in the 21st century.

Conditional Aid

The EUTF was conditional: it channeled development aid into security-related migration projects and imposed positive and negative conditionalities that served as leverage for African cooperation. When the European Commission announced its Migration Partnership Framework in 2016, it stated that development and trade policies would use positive and negative conditionalities to encourage cooperation on the EU's migration management projects. The "more for more, less for less" framework embedded in development financing means that "African governments use migration cooperation as a bargaining chip for procuring finance through renting inherent powers of state sovereignty to control entry and exit." This coercive and concessional method contradicts the spirit of cooperation emphasized at the Valletta Summit in 2015 and undermines the autonomy of African states, as these conditionalities perpetuate neo-colonial practices.

EUTF Governance Structure and Oversight

The EUTF was the product of a multilateral decision-making process. However, its governance structure, which limits proper stakeholder engagement from African representatives, signals the EU's push to prioritize its own policies over development in Africa. The European Commission claims to take a bottom-up approach in which EU delegations play a key role in identifying and formulating EUTF projects through consultations and dialogues that build partnerships with local stakeholders (civil society organizations, national and local authorities, and representatives). Proposals are then drafted by EUTF for Africa teams based in the Commission's headquarters and the EU delegations, and submitted to the Operational Committee for approval. Once approved, the proposals are implemented via EU member states' authorities, development and technical cooperation agencies, civil society organizations, international or UN organizations, and private-sector entities. The governance of the EUTF depends on the Strategic Board and on Operational Committees for each of the three regions where the EUTF distributed funds. The Strategic Board is responsible for "adopting the strategy of the EUTF, adjusting the geographical and thematic scope of the EUTF in reaction to evolving issues, and deciding upon amendments to the guiding document establishing the internal rules for the EUTF." The board is chaired by the European Commission and composed of representatives and contributing donors.
The Operational Committee is responsible for "reviewing and approving actions to be financed, supervising the implementation of the actions, and approving the annual report and accounts for transmission to the Strategic Board." In both the Board and the Committee, African partner countries can act only as observers and hold no decision-making powers. This management framework is ineffective by design: it limits the participation of the African parties who have the most comprehensive knowledge of the continent's needs and of where funds should be directed, leaving them structurally silenced.

The classification of the EUTF as development aid from the EU to Africa also provided a loophole under which parliamentary oversight was not required. The European Development Fund, which operates outside the EU budget, financed most of the aid, bypassing conventional parliamentary procedures and allowing for swift implementation of the fund. A spokesperson for the European Commission's DG DEVCO claimed that simplifying the procedures allows for more flexibility so that projects can be implemented earlier. Proponents of the fund believe this ease of implementation is what makes it advantageous. However, opponents of the fund, like Elly Schlein, a member of the European Parliament's Development Committee, claimed that the EU Parliament has not been given "the right democratic scrutiny" over the fund. The framing of the fund as an "emergency instrument" led to scaled-back bureaucratic procedures intended to increase effectiveness, as project cycles were much shorter than in traditional development programming. The consolidation of power in EU institutions and representatives meant that EUTF projects were "identified at the country level under the leadership of the EU Delegations, discussed and selected by an Operational Committee." Engagement from African stakeholders and civil society was not required. An interview with a representative from the Operational Committee revealed that EUTF "projects were simply approved without discussion. Negotiations took place upstream between EUTF managers, European agencies, EU delegations, and partner countries." This form of decision-making amplifies hierarchical structures between European and African representatives.

Strategic Partnerships

Certain EU member states partnered with African states to implement migration management programs through which they exercised authority over the movement of migrants within Africa, especially in origin and transit countries. Not only do these policies directly conflict with the EU's stated commitments regarding development aid and cooperation with partner countries, but the EU's agenda also echoes the way European empires leveraged local African officials to undertake security operations on the continent. That exploitative relationship is paralleled today by the EU's allocation of capital, military equipment, and capacity-building instruments to African representatives who adhere to the needs of EU leaders. This pattern is visible in various projects and funding executed under the EUTF. Though reluctant to enter into such agreements with Europe, African policymakers are forced into a "perpetual balancing act, juggling domestically-derived interests with the demands of external donor and opportunity structures." This concession stems from the inherent power asymmetry between relatively weak and powerful states, upholding colonial legacy hierarchies.
Case Studies on Libya and Ethiopia

In the following section, I use Libya and Ethiopia as case studies to provide evidence that the EUTF's prioritization of funding for migration management projects, increased policing and surveillance in these countries, and imposition of positive and negative conditionalities reflect neo-colonial practices that assert dominance over the movement of African irregular migrants. I chose these countries because each falls within a different geographical window and serves as a popular departure or transit country where the European Union is heavily involved in migration management projects.

Libya

Libya is a major departure country for migrants from West African countries of origin such as Nigeria, Guinea, Gambia, Ivory Coast, Mali, and Senegal. Italy has demonstrated strategic interest in Libya due to its geographical proximity and colonial legacy. Between 2017 and 2022, the Italian Ministry of Interior (MI) led the implementation of various migration management projects that sought to curb the arrival of migrants in Italy. In 2017, the MI led the first phase of its project "Support to Integrated Border and Migration Management in Libya," with a budget of EUR 42.2 million and EUR 2.23 million in co-financing from Italy. The principal objective of this phase was migration management; focus areas included strengthening border control, increasing surveillance activities, combatting human smuggling and trafficking, and conducting search and rescue operations. The second phase of the project, launched by the MI in 2018 and running until 2024 with a budget of EUR 15 million, focused on capacity-building activities and the institutional strengthening of authorities such as the Libyan Coast Guard and the General Administration of Coastal Security. It also advanced the land-border capabilities of relevant authorities and enhanced search and rescue (SAR) capabilities by supplying SAR vessels and corresponding maintenance programs. The beneficiaries of this project included 5,000 members of relevant authorities from the Libyan Ministry of Interior (MoI), Ministry of Defense (MoD), and Ministry of Communications. The indirect beneficiaries include "future migrants rescued at the sea due to the [provision] of life-saving equipment to Libyan Coast Guard and General Administration for Coastal Security for them to be able to save lives."

Italy's actions under the EUTF compromise the proper use of development financing tools by diverting them to security-related projects. Its engagement with and strengthening of Libyan security apparatuses such as the Libyan Coast Guard also undermine the human rights values that EU member states claim to promote in their foreign policies, as the Libyan Coast Guard is notorious for violating non-refoulement principles and committing human rights violations such as extortion, arbitrary detention, and sexual violence against migrants and asylum seekers. Recognizing the brutal actions of the border authorities and the deplorable living conditions in Libyan detention centers, the Assize Court in Milan condemned the torture and violence inflicted in these centers. In November 2017, the UN High Commissioner for Human Rights released a statement criticizing the EU's support for the Libyan Coast Guard as "inhumane," as it led to the detention of migrants in "horrific" conditions in Libya. Despite institutional disapproval of the EU's and Italy's involvement in Libya, funding for these security projects continued.
Ethiopia

While Ethiopia was never formally colonized, it was under Italian occupation from 1935 to 1941 and subsequently fell under (in)formal British control from 1941 to 1944. The EUTF initiatives in Ethiopia do not show the same patterns of cooperation seen in Libya and Niger, since Ethiopia was a key interest for the EU due to its status as one of the main countries of origin, transit, and destination for migrants and refugees. The EUTF's 2016 report highlighted that Ethiopia hosts over one million displaced people. It is also the biggest recipient of EUTF funding in the Horn of Africa. Its geographical proximity to countries like Eritrea, Somalia, and South Sudan has vastly affected its migration demographics, making it a focus area for the EU's development aid under the EUTF. While there were pre-existing migration management schemes in Ethiopia, they were concerned with the return and reintegration of irregular Ethiopian migrants and refugees rather than with building up the capacity of security actors, as seen in other regions. This objective was linked with positive conditionalities: the Third Progress Report on the Partnership Framework with third countries under the European Agenda on Migration ties progress on returns and readmissions to greater financial support for the refugees residing within Ethiopia. Additional projects in Ethiopia were geared toward economic development and focused on addressing the root causes outlined at Valletta. Some of these initiatives included job creation and the provision of energy access, healthcare, and education to vulnerable populations, in line with development cooperation. However, the European Union's increasing focus on the return and readmission of Ethiopian migrants can decrease revenue derived from remittances, which contribute three times more to the Ethiopian economy than development financing. This approach ensures the fulfillment of the EU's migration interests while undermining Ethiopia's economic needs. Ethiopian officials also expressed disappointment with the EUTF measures because they were guided by the EU's focus on repatriation, thereby eroding migration cooperation with Ethiopia. With regard to EU interests in Ethiopia, an EU official claimed: "We can pretend that we have joint interest in migration management with Africa, but we don't. The EU is interested in return and readmission. Africa is interested in root causes, free movement, legal routes, and remittances. We don't mention that our interests are not aligned." This non-alignment of interests is irrelevant to the EU because it is the more dominant actor and has the power to assert its priorities by using money as leverage. However, this pattern of interaction comes at the cost of losing cooperation with Ethiopian stakeholders and diverting finances away from the refugee and migrant populations in Ethiopia who need humanitarian assistance.

Perspectives from Africa

African representatives and ambassadors displayed suspicion about the fund's motives and called on the EU to fund projects that increase economic opportunities in their respective countries. As the Nigerien mayor of Tchirozerine, Issouf Ag Maha, stated, "as local municipalities, we don't have any power to express our needs. The EU and project implementers came here with their priorities.
It's a 'take it or leave it' approach, and in the end, we have to take it because our communities need support." Maha's statement highlights the role the EU plays in shaping the direction of development money and how its priorities overshadow the decisions and input of local officials, who are significantly more knowledgeable about the needs of their communities. Despite diverging interests and priorities, African officials concede to EU demands because their communities require financial resources to alleviate hardships. President Akufo-Addo of Ghana claimed that "instead of investing money in preventing African migrants from coming to Europe, the EU should be spending more to create jobs across the continent." Similarly, Senegalese President Macky Sall, former Chairperson of the African Union, warned that the trust fund to tackle the causes of migration is not sufficient to meet the needs of the continent, stating that "if we want young Africans to stay in Africa, we need to provide Africa with more resources." The allocation of aid to security-related projects comes at the expense of funding genuine development projects that align with the needs of African communities. It also takes advantage of 'cash-starved' governments. These statements underscore the need for the EUTF to direct capital toward structural and sustainable economic development as opposed to combatting, detaining, or returning migrants. However, the EU has not been responsive to these inputs from its African stakeholders, despite stressing the importance of cooperation and partnership during the Valletta Summit.

Reinforcing Power Imbalances

The imposition of European policies and priorities through the EUTF takes advantage of African nations' relatively weaker economic standing and agency, showing that the political and security needs of powerful states and institutions determine where and how development aid is designated. It also shows the continued influence of and intervention by European interests in their ostensibly independent former colonial holdings, reaffirming Nkrumah's theory that foreign capital, such as development aid, can be used for the exploitation of developing countries by their former colonial powers. This hypocrisy goes against the EU's normative approach to foreign policy while continuing to reinforce power imbalances and colonial-era hierarchies between Europe and Africa.

Discussion

Critically examining the European Union Trust Fund in the broader context of EU-Africa relations demonstrates how the EUTF represents a complex intersection of historical legacies, political interests and expediency, and political ideologies that determine attitudes toward migrants and refugees and thus shape policy outcomes. These factors reinforce one another, showing the multifaceted nature of migration governance. The neo-colonialism lens in my hypothesis provides historical context to show how enduring colonial legacies continue to guide policies today. This lens also forms the basis for discourse about EU-Africa relations because of the visible power imbalances that are sustained through policies like the EUTF, which are structurally designed to achieve European political interests at the expense of the needs of African states. As seen through the case studies on Libya and Ethiopia, development aid is not always allocated for the benefit of the recipient. Rather, aid can be abused as a political tool to reach the objectives of the donor institutions.
Despite the rhetoric at Valletta of cooperation between stakeholders, the preservation of human lives, equal partnership, and addressing root causes, the strategic policy design of the EUTF highlights the persistence of neo-colonialism because it continues historical patterns of exploitation and hierarchy between Europe and Africa.

Conclusion

The findings in this paper show that the EUTF was not merely a development instrument but also a political one that carried negative consequences for African irregular migrants. The securitization of aid, along with the EU's other externalization policies, has not effectively solved the problems that caused the migration crisis; instead, it has reinforced them. The model of the EU's migration policies under the EUTF has also created issues beyond the realm of migration. As discussed, it continues to sustain power imbalances between Europe and Africa, shift aid priorities, and undermine development goals. Addressing the migration crisis will require a paradigm shift in the EU migration policy domain. The EU needs to move away from a security-based approach toward a holistic, rights-based one. This ideological reform requires the EU to look inward and address its own limitations and failures by recognizing its neo-colonial practices, acting out of mutual rather than political interests, and, lastly, collectively humanizing the migrants and refugees arriving in Europe for safety and opportunity. Through these measures, the EU and African stakeholders can address the true root causes of migration, which stem from structural global inequalities.

References

"A European Agenda on Migration." European Commission. November 2015. https://www.consilium.europa.eu/media/21933/euagendafor-migration_trustfund-v10.pdf
Abdelaaty, Lamis. "European countries are welcoming Ukrainian refugees. It was a different story in 2015." The Washington Post. March 23, 2022. https://www.washingtonpost.com/politics/2022/03/23/ukraine-refugees-welcome-europe/
Abrahams, Jessica. "Red flags raised over governance of EU Trust Fund projects." Devex. September 22, 2017. https://www.devex.com/news/red-flags-raised-over-governance-of-eu-trust-fund-projects-91027
"Agreement Establishing The European Union Emergency Trust Fund For Stability And Addressing Root Causes Of Irregular Migration And Displaced Persons in Africa, And Its Internal Rules." Trust Fund for Africa. 2015. https://trust-fund-for-africa.europa.eu/document/download/4cb965d7-8ad5-4da9-9f6d-3843f4bf0e82_en?filename=Constitutive%20Agreement%20
Alcalde, Xavier. "Why the refugee crisis is not a refugee crisis." International Catalan Institute for Peace. Accessed March 14, 2024. https://www.icip.cat/perlapau/en/article/why-the-refugee-crisis-is-not-a-refugee-crisis/
Allen, Peter. "French politician says country is 'white race' and immigrants should adapt or leave." The Mirror. September 27, 2015. https://www.mirror.co.uk/news/world-news/french-politician-says-country-white-6528611
Bachman, Bart. "Diminishing Solidarity: Polish Attitudes toward the European Migration and Refugee Crisis." Migration Policy Institute. June 16, 2016. https://www.migrationpolicy.org/article/diminishing-solidarity-polish-attitudes-toward-european-migration-and-refugee-crisis
Ball, Sam. "France's far-right National Front tops first round of regional vote." France24. December 6, 2015.
https://www.france24.com/en/20151206-france-far-right-national-front-le-pen-tops-first-round-regional-election
Boswell, C. "The 'external dimension' of EU immigration and asylum policy." International Affairs 79, no. 3 (2003): 619-639. https://onlinelibrary.wiley.com/doi/abs/10.1111/1468-2346.00326
Campbell, Zach. "Europe's deadly migration strategy." Politico. February 28, 2019. https://www.politico.eu/article/europe-deadly-migration-strategy-leaked-documents/
Cantat, Celine. "The ideology of Europeanism and Europe's migrant other." International Socialism 152 (October 2016). https://isj.org.uk/the-ideology-of-europeanism-and-europes-migrant-other/
Castillejo, Clare. "The EU Migration Partnership Framework: Time for a Rethink?" German Development Institute. 2017. https://www.idos-research.de/uploads/media/DP_28.2017.pdf
Chase, Jefferson. "AfD: From anti-EU to anti-immigration." DW. October 28, 2019. https://www.dw.com/en/afd-what-you-need-to-know-about-germanys-far-right-party/a-37208199
Chebel d'Appollonia, Ariane. Frontiers of Fear: Immigration and Insecurity in the United States and Europe. Ithaca, NY: Cornell University Press, 2012.
"CTR - BUDGET SUPPORT - Contrat relatif à la Reconstruction de l'Etat au Niger en complément du SBC II en préparation / Appui à la Justice, Sécurité et à la Gestion des Frontières au Niger." European Commission. Accessed March 14, 2024. https://eutf.akvoapp.org/dir/project/5651
De Guerry, Olivia and Andrea Stocchiero. "Partnership or Conditionality? Monitoring the Migration Compacts and EU Trust Fund for Africa." Concord Europe. 2018. https://concordeurope.org/wp-content/uploads/2018/01/CONCORD_EUTrustFundReport_2018_online.pdf
"David Cameron: 'Swarm' of migrants crossing the Mediterranean." BBC. July 30, 2015. https://www.bbc.com/news/av/uk-politics-33714282
European Commission. "Commission announces New Migration Partnership Framework: reinforced cooperation with third countries to better manage migration." Media release. June 7, 2016. https://ec.europa.eu/commission/presscorner/detail/en/IP_16_2072
European Commission. "Fourth Progress Report on the Partnership Framework with third countries under the European Agenda on Migration." No. 350. June 13, 2017. https://www.eeas.europa.eu/sites/default/files/4th_progress_report_partnership_framework_with_third_countries_under_european_agenda_on_migration.pdf
European Council. "Remarks by President Donald Tusk at the press conference of the Valletta summit on migration." Press release. November 12, 2015. https://www.consilium.europa.eu/en/press/press-releases/2015/11/12/tusk-press-conference-valletta-summit/
"EU Solidarity with Ukraine." European Council. Accessed March 14, 2024. https://www.consilium.europa.eu/en/policies/eu-response-ukraine-invasion/eu-solidarity-ukraine/
"EU-Turkey joint action plan." European Commission. October 15, 2015. https://ec.europa.eu/commission/presscorner/detail/en/MEMO_15_5860
Fanta, Esubalew B. "The British on the Ethiopian Bench: 1942-1944." Northeast African Studies 16, no. 2 (2016): 67-96. https://www.jstor.org/stable/10.14321/nortafristud.16.2.0067
FitzGerald, David S. "Remote control of migration: Theorising territoriality, shared coercion, and deterrence." Journal of Ethnic and Migration Studies 46, no. 1 (2020): 4-22. https://www.tandfonline.com/doi/full/10.1080/1369183X.2020.1680115
Gray Meral, Amanda.
"Learning the lessons from EU-Turkey deal: Europe's renewed test." ODI. Accessed March 14, 2024. https://odi.org/en/insights/learning-the-lessons-from-the-euturkey-deal-europes-renewed-test/
Hansen, Peo and Stefan Jonsson. Eurafrica: The Untold History of European Integration and Colonialism. London: Bloomsbury Publishing, 2014.
"International Affairs." European Commission. Accessed March 14, 2024. https://home-affairs.ec.europa.eu/policies/international-affairs_en
Islam, Shada. "Decolonising EU-Africa Relations Is a Pre-Condition For a True Partnership of Equals." Center for Global Development. February 15, 2022. https://www.cgdev.org/blog/decolonising-eu-africa-relations-pre-condition-true-partnership-equals
Kabata, Monica and An Jacobs. "The 'migrant other' as a security threat: the 'migration crisis' and the securitising move of the Polish ruling party in response to the EU relocation scheme." Journal of Contemporary European Studies 13, no. 4 (November 13, 2022): 1223-1239. https://www.tandfonline.com/doi/full/10.1080/14782804.2022.2146072
Khakee, Anna. "European Colonial Pasts and the EU's Democracy-promoting Present: Silences and Continuities." Italian Journal of International Affairs 57, no. 3 (2022): 103-120. https://www.tandfonline.com/doi/abs/10.1080/03932729.2022.2053352
Kirisci, Kemal. "As EU-Turkey migration agreement reaches the five-year mark, add a job creation element." Brookings. March 17, 2021. https://www.brookings.edu/articles/as-eu-turkey-migration-agreement-reaches-the-five-year-mark-add-a-job-creation-element/
Kundnani, Hans. Eurowhiteness: Culture, Empire, and Race in the European Project. London: Hurst Publishers, 2023.
Langan, Mark. Neo-Colonialism and the Poverty of Development in Africa. Cham: Palgrave Macmillan, 2018.
Lehne, Stefan. "How the Refugee Crisis Will Reshape the EU." Carnegie Europe. February 4, 2016. https://carnegieeurope.eu/2016/02/04/how-refugee-crisis-will-reshape-eu-pub-62650
Liguori, Anna. Migration Law and Externalization of Border Controls. Abingdon: Routledge, 2019.
Mager, Therese. "The Emergency Trust Fund for Africa: Examining Methods and Motives in the EU's External Migration Agenda." United Nations University Institute on Comparative Regional Integration Studies. 2018. https://cris.unu.edu/sites/cris.unu.edu/files/UNU-CRIS%20Policy%20Brief%202018-2.pdf
Mainwaring, Ċetta. "Constructing Crises to Manage: Migration Governance and the Power to Exclude." In At Europe's Edge: Migration and Crisis in the Mediterranean. Oxford: Oxford Academic, 2019. https://academic.oup.com/book/32397/chapter/268689331
Maru, Mehari T. "Migration Policy-making in Africa: Determinants and Implications for Cooperation with Europe." Working Paper 2021/54, European University Institute, 2021. https://cadmus.eui.eu/handle/1814/71355
Micinski, Nicholas and Kelsey Norman. Migration Management Aid, Governance, and Repression. Unpublished manuscript. Accessed March 14, 2024.
"Mission." EUNAVFOR MED Operation Sophia. Accessed March 14, 2024. https://www.operationsophia.eu/about-us/
Moravcsik, Andrew. Review of Eurowhiteness: Culture, Empire, and Race in the European Project, by Hans Kundnani. Foreign Affairs. October 23, 2023. https://www.foreignaffairs.com/reviews/eurowhiteness-culture-empire-and-race-european-project
Nkrumah, Kwame. Neo-Colonialism: The Last Stage of Imperialism.
London: Panaf, 1965.
Oliveira, Ivo and Vince Chadwick. "Gabriel compares far-right party to Nazis." Politico. October 23, 2015. https://www.politico.eu/article/sigmar-gabriel-compares-far-right-alternative-for-germany-afd-to-nazis-interview-rtl/
"Objective and Governance." Emergency Trust Fund for Africa. Accessed March 14, 2024. https://trust-fund-for-africa.europa.eu/our-mission/objective-and-governance_en
Pacciardi, Agnese. "A European narrative of border externalization: the European trust fund for Africa story." European Security. January 22, 2024. https://www.tandfonline.com/doi/full/10.1080/09662839.2024.2304723
Pare, Celine. "Selective Solidarity? Racialized Othering in European Migration Policies." Amsterdam Review of European Affairs 1, no. 1: 42-54. https://www.europeanhorizonsamsterdam.org/_files/ugd/79a695_dbd76026a17f488ea00cae358bfebe8d.pdf#page=47
Reilly, Rachael, and Michael Flynn. "The Ukraine Crisis Double Standards: Has Europe's Response to Refugees Changed?" Media release. March 2, 2022. https://reliefweb.int/report/ukraine/ukraine-crisis-double-standards-has-europe-s-response-refugees-changed
Rieker, Pernille and Marianne Riddervold. "Not so unique after all? Urgency and norms in EU foreign and security policy." Journal of European Integration 44, no. 4 (September 21, 2021): 459-473. https://www.tandfonline.com/doi/full/10.1080/07036337.2021.1977293
Roberts, Bayard, Adrianna Murphy, and Martin McKee. "Europe's collective failure to address the refugee crisis." Public Health Reviews 37, no. 1 (2016): 1-5. https://publichealthreviews.biomedcentral.com/articles/10.1186/s40985-016-0015-6
Sahin-Mencutek, Zeynep, Soner Barthoma, N. Ela Gökalp-Aras, and Anna Triandafyllidou. "A crisis mode in migration governance: comparative and analytical insights." Comparative Migration Studies 10, no. 12 (March 21, 2022): 1-19. https://comparativemigrationstudies.springeropen.com/articles/10.1186/s40878-022-00284-2
Santos, Mireia F. "Three lessons from Europe's response to Ukrainian migration." European Council on Foreign Relations. August 9, 2023. https://ecfr.eu/article/three-lessons-from-europes-response-to-ukrainian-migration/
Schacht, Kira. "EU uses development aid to strongarm Africa on migration." European Data Journalism Network. April 13, 2022. https://www.europeandatajournalism.eu/cp_data_news/eu-uses-development-aid-to-strongarm-africa-on-migration/
Schulz, Martin. "Speech at the Valletta Summit on Migration." Speech, Valletta, Malta, November 11, 2015. European Parliament. https://www.europarl.europa.eu/former_ep_presidents/president-schulz-2014-2016/en/press-room/speech_at_the_valletta_summit_on_migration.html
Martin-Shields, Charles, Benjamin Schraven, and Steffen Angenendt. "More Development - More Migration? The 'Migration Hump' and its Significance For Development Policy Cooperation with Sub-Saharan Africa." German Development Institute. 2017. https://www.idos-research.de/en/briefing-paper/article/more-development-more-migration-the-migration-hump-and-its-significance-for-development-policy-co-operation-with-sub-saharan-africa/
Silver, Laura. "Populists in Europe – especially those on the right – have increased their vote shares in recent elections." Pew Research Center. October 6, 2022. https://www.pewresearch.org/short-reads/2022/10/06/populists-in-europe-especially-those-on-the-right-have-increased-their-vote-shares-in-recent-elections/
Sojka, Aleksandra.
"Supranational identification and migration attitudes in the European Union." BACES Working Paper no. 02-2021. Barcelona Center for Migration Studies, 2021.
"Strategy for Security and Development in the Sahel." European Union External Action Service. Accessed March 14, 2024. https://www.eeas.europa.eu/sites/default/files/strategy_for_security_and_development_in_the_sahel_en_0.pdf
"Support to Integrated border and migration management in Libya – First Phase." Emergency Trust Fund for Africa. Accessed March 14, 2024. https://trust-fund-for-africa.europa.eu/our-programmes/support-integrated-border-and-migration-management-libya-first-phase_en
"Support to Integrated border and migration management in Libya – Second Phase." Emergency Trust Fund for Africa. Accessed March 14, 2024. https://trust-fund-for-africa.europa.eu/our-programmes/support-integrated-border-and-migration-management-libya-second-phase_en
Tawat, Mahama and Eileen Lamptey. "The 2015 EU-Africa Joint Valletta action plan on migration: A parable of complex interdependence." International Migration 60, no. 6 (December 21, 2020): 28-42. https://onlinelibrary.wiley.com/doi/10.1111/imig.12953
"Trust Fund Financials." Emergency Trust Fund for Africa. Accessed March 14, 2024. https://trust-fund-for-africa.europa.eu/trust-fund-financials_en
Tusk, Donald. "Valletta Summit on Migration." Speech, Valletta, Malta, November 2015. European Council. https://www.consilium.europa.eu/en/meetings/international-summit/2015/11/11-12/
"Valletta Summit, 11-12 November 2015 Action Plan." European Commission. Accessed March 14, 2023. https://www.consilium.europa.eu/media/21839/action_plan_en.pdf
"What is the EU-Turkey deal?" International Rescue Committee. March 16, 2023. https://www.rescue.org/eu/article/what-eu-turkey-deal
Zaun, Natascha and Olivia Nantermoz. "Depoliticising EU migration policies: The EUTF Africa and the politicization of development aid." Journal of Ethnic and Migration Studies 49, no. 12 (May 2023): 2986-3004. https://www.tandfonline.com/doi/full/10.1080/1369183X.2023.2193711
Zaun, Natascha and Olivia Nantermoz. "The use of pseudo-causal narratives in EU policies: the case of the European Union Emergency Trust Fund for Africa." Journal of European Public Policy 29, no. 4 (February 28, 2021): 510-529. https://www.tandfonline.com/doi/full/10.1080/13501763.2021.1881583
"2016 Annual Report." Trust Fund for Africa. 2016. https://trust-fund-for-africa.europa.eu/system/files/2018-10/eutf_2016_annual_report_final_en-compressed_new.pdf
"2017 Annual Report." Trust Fund for Africa. 2017. https://trust-fund-for-africa.europa.eu/document/download/1a5f88be-e911-4831-9c2a-3a752aa27f7e_en?filename=EUTF%202017%20Annual%20Report%20%28English%29.pdf
"2018 Annual Report." Trust Fund for Africa. 2018. https://trust-fund-for-africa.europa.eu/document/download/fb0737ce-3183-415a-905c-4adff77bfce3_en?filename=Annual%20Report%202018%20%28EN%29%20
"2019 Annual Report." Trust Fund for Africa. 2019. https://trust-fund-for-africa.europa.eu/document/download/e340b953-5275-43e5-8bd3-af15be9fc17a_en?filename=EUTF%202019%20Annual%20Report%20%28English%29.pdf
"2020 Annual Report." Trust Fund for Africa. 2020. https://trust-fund-for-africa.europa.eu/document/download/4a4422e5-253f-4409-b25c-18af9c064ca1_en?filename=eutf-report_2020_eng_final.pdf
"2021 Annual Report." Trust Fund for Africa. 2021.
https://trust-fund-for-africa.europa.eu/document/download/f3690961-e688-44de-9789-255875979c1b_en?filename=EUTF%202021%20Annual%20Report%20%28English%29
"2022 Annual Report." Trust Fund for Africa. 2022. https://trust-fund-for-africa.europa.eu/document/download/f3690961-e688-44de-9789-255875979c1b_en?filename=EUTF%202021%20Annual%20Report%20%28English%29
