
Search Results


  • About Us | BrownJPPE

    Mission Statement

The Brown University Journal of Philosophy, Politics, and Economics (JPPE) is a peer-reviewed academic journal for undergraduate and graduate students, sponsored by the Center for Philosophy, Politics, and Economics at Brown University. The JPPE aims to promote intellectual rigor, free thinking, original scholarship, interdisciplinary understanding, and global leadership. By publishing student works of philosophy, politics, and economics, the JPPE attempts to unite, in a single academic discourse, fields that are too often partitioned. In doing so, the JPPE aims to produce a scholarly product greater than the sum of its individual parts. By adopting this model, the JPPE attempts to provide new answers to today's most pressing questions.

Five Pillars of the JPPE

1. Interdisciplinary Intellectualism: The JPPE is committed to an interdisciplinary approach to academics. By publishing scholarly work within the disciplines of philosophy, politics, and economics, we believe we are producing work that transcends the barriers of any one field, producing a sum greater than its individual parts.

2. Diversity: The JPPE emphasizes the importance of diversity in the articles we publish, the authors we work with, and the questions we consider. The JPPE is committed to equal opportunity and to creating an inclusive environment for all our employees. We welcome submissions and job applicants regardless of ethnic origin, gender, religious beliefs, disability, sexual orientation, or age.

3. Academic Rigor: In order to ensure that the JPPE is producing quality student scholarship, we are committed to a peer review process whereby globally renowned scholars review all essays prior to publication. We expect our submissions to be well written, well argued, well researched, and innovative.

4. Free Thinking and Original Arguments: The JPPE values free thinking and the contribution of original ideas. We seek excellent arguments and unique methods of problem solving when looking to publish an essay. This is one way in which the JPPE hopes to contribute to the important debates of our time.

5. Global Leadership: By publishing work in philosophy, politics, and economics, we hope the JPPE will serve as a useful tool for future world leaders who would like to consider pressing questions in new ways, using three powerful lenses.

  • Predictive Algorithms in the Criminal Justice System: Evaluating the Racial Bias Objection

    Rebecca Berman

Increasingly, many courtrooms around the U.S. are utilizing predictive algorithms (PAs): AI tools that assign risk scores for future offending to defendants, based on various data about the defendant (not including race), in order to inform bail, sentencing, and parole decisions with the goals of increasing public safety, increasing fairness, and reducing mass incarceration. Although these PAs are intended to introduce greater objectivity to the courtroom by more accurately and fairly predicting who is most likely to commit future crimes, many worry about the racial inequities that these algorithms may perpetuate. Here, I scrutinize and subsequently support the claim that PAs can operate in racially biased ways, providing a strong ethical objection against their use. Then, I raise and consider the rejoinder that we should still utilize PAs because they are morally preferable to the alternative: leaving judges to their own devices. I conclude that the rejoinder adequately, but not conclusively, succeeds in rebutting the objection. Unfair racial bias in PAs is not sufficient grounds to reject their use outright, for we must evaluate the potential racial inequities perpetuated by utilizing these algorithms relative to the potentially greater racial inequities perpetuated without their use.

The Racial Bias Objection to Predictive Risk Assessment

ProPublica conducted research to support concerns that COMPAS (a leading predictive algorithm used in many courtrooms) is unfairly racially biased. Its research on risk scores for defendants in Florida showed:

a. 44.9% of black defendants who do not end up recidivating are mislabeled as "high risk" (defined as a score of 5 or above), while only 23.5% of white defendants who do not end up recidivating are mislabeled as "high risk."

b. 47.7% of white defendants who end up recidivating are mislabeled as "low risk," while only 28% of black defendants who end up recidivating are mislabeled as "low risk" (1).

Intuitively, these findings strike us as an unfair racial disparity. COMPAS's errors operate in different directions for white and black defendants: disproportionately overestimating the risk of black defendants while disproportionately underestimating the risk of white defendants. In "Measuring Algorithmic Fairness," Deborah Hellman further unpacks the unfairness of this kind of racialized error rate disparity. First, different directions of error carry different costs. In the criminal justice system, we generally view false positives, which punish an innocent person or over-punish someone who deserves less punishment, as more costly and morally troublesome than false negatives, which fail to punish or under-punish someone who is guilty. The policies and practices we have constructed in the U.S. system reflect this view. Defendants are innocent until proven guilty, and there is a high burden of proof for conviction. Because of this, the judicial system errs on the side of producing more false negatives than false positives. Given the widely accepted view that false positives (punishing an innocent person or over-punishing someone) carry a greater moral cost than false negatives (failing to punish or under-punishing a guilty individual) in the criminal justice system, we should be especially troubled by black defendants disproportionately receiving errors in the false positive direction (2).
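ProPublica's comparison is, at bottom, a pair of conditional error rates computed separately for each group: the share of non-recidivists labeled "high risk" (the false positive rate) and the share of recidivists labeled "low risk" (the false negative rate). A minimal sketch of that computation follows; the records and field names are hypothetical stand-ins, not ProPublica's actual dataset.

```python
import numpy as np

# Hypothetical records standing in for a defendant table (one entry per person).
# group: which group the defendant belongs to; high_risk: a COMPAS-style score of
# 5 or above; recidivated: whether the person actually re-offended.
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
high_risk = np.array([True, True, False, True, False, True, False, False])
recidivated = np.array([False, True, False, False, False, True, False, True])

def error_rates(mask):
    """False positive and false negative rates within one group."""
    hr, y = high_risk[mask], recidivated[mask]
    fpr = hr[~y].mean()    # non-recidivists labeled "high risk"
    fnr = (~hr)[y].mean()  # recidivists labeled "low risk"
    return fpr, fnr

for g in ("A", "B"):
    fpr, fnr = error_rates(group == g)
    print(f"group {g}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

Run on ProPublica's published Broward County data rather than these toy rows, this is essentially the calculation that produced the 44.9%/23.5% and 47.7%/28% figures quoted above.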
A black defendant mislabeled as "high risk" may very well lead judges to impose a much longer sentence or post higher bail than is fair or necessary, a cost that black defendants would shoulder disproportionately (in comparison to white defendants) given the error rate disparity produced by COMPAS. Second, COMPAS's lack of error rate parity is particularly problematic due to its links to structural biases in the data used by PAs. Mathematically, a calibrated algorithm will yield more false positives in the group with a higher base rate of the outcome being predicted. PAs act upon data that suggest a much higher base rate of black offending than white offending, and this base rate discrepancy can reflect structural injustices:

I. Measurement Error: Black communities are over-policed, so a crime committed by a black person is much more likely to lead to an arrest than a crime committed by a white person. Therefore, the measured difference in offending between black and white offenders is much greater than the real (statistically unknowable) difference in offending between black and white offenders, and PAs unavoidably utilize this racially biased arrest data (3).

II. Compounding Injustice: Due to historical and ongoing systemic racism, black Americans are more likely to live in conditions, such as poverty, certain neighborhoods, and low educational attainment, that correlate with higher predicted criminal behavior. Therefore, if and when PAs utilize criminogenic conditions as data points, relatively more black offenders will score "high risk" as a reflection of past injustices (4).

To summarize, data reflecting unfair racial disparities are necessarily incorporated into COMPAS's calculations, so unfair racial disparities will come out of COMPAS's predictions. For all of these reasons (the high cost of false positives, measurement error, and compounding injustice) the lack of error rate parity is a morally relevant attack on the fairness of COMPAS. By being twice as likely to label black defendants who do not end up re-offending as "high risk" as white defendants, COMPAS operates in an unfairly racially biased way. Consequently, we should not use PAs like COMPAS in the criminal justice system.

Rejoinder to the Racial Bias Objection to Predictive Risk Assessment

The argument, however, is not that simple. An important rejoinder is based on the very reason why we find such tools appealing in the first place: humans are imperfect, biased decision-makers. We must consider the alternative to using risk tools in criminal justice settings: sole reliance on a human decision-maker, one that may be just as susceptible, if not more so, to racial bias. Due to historical and continuing forces in the U.S. that have created an association between dark skin and criminality, and the fact that judges are disproportionately white, judges are unavoidably ingrained with implicit or even explicit bias that leads them to perceive black defendants as more dangerous than their white counterparts. This bias inevitably seeps into judges' highly subjective decisions. Many studies of judicial decision-making show racially disparate outcomes in bail, sentencing, and other key criminal justice decisions (5). For example:

a. Arnold, Dobbie, and Yang (2018) find that "black defendants are 3.6 percentage points more likely to be assigned monetary bail than white defendants and, conditional on being assigned monetary bail, receive bail amounts that are $9,923 greater" (6).
b. According to the Bureau of Justice Statistics, "between 2005 and 2012, black men received roughly 5% to 10% longer prison sentences than white men for similar crimes, after accounting for the facts surrounding the case" (7).

Consequently, the critical and challenging question is not whether PAs are tainted by racial biases, but rather which is the "lesser of two evils" in terms of racial justice: utilizing PAs or leaving judges to their own devices? I will argue the former, especially if we consider the long-term potential for improving our predictive decision-making through PAs. First, although empirical data on this precise matter are limited, we have reason to believe that utilizing well-constructed PAs can reduce racial inequities in the criminal justice system. Kleinberg et al. (2017) modeled New York City pre-trial hearings and found that "a properly built algorithm can reduce crime and jail populations while simultaneously reducing racial disparities" (8). Even though the ProPublica analysis highlighted disconcerting racial data, it did not compare decision-making using COMPAS to decisions made by judges without such a tool. Second, evidence-based algorithms present more readily available means for improvement than the subjective assessments of judges. Scholars and journalists can critically examine the metrics used by algorithms and their relative weights, and can work to eliminate or reduce the weight of metrics that are found to be especially potent in producing racially skewed and inaccurate predictions. Also, as Hellman suggests, race can be soundly incorporated into PAs to increase their overall accuracy, because certain metrics can be differently predictive of recidivism in white versus black offenders. For example, "housing stability" might be more predictive of recidivism in white offenders than in black offenders (9). If an algorithm's assessment of this metric were to occur in conjunction with information on race, its overall predictions would improve, reducing the level of unfair error rate disparity (10). Furthermore, PAs' level of bias is consistent and uniform, while the biases of judges are highly variable and hard to predict or assess. Uniform bias is easier to ameliorate than variable, individual bias, for only one agent of bias has to be tackled rather than many. All in all, there appear to be promising ways to reduce the unfairness of PAs, particularly if we construct these tools with a concern for systemic biases, while there currently appears to be no ready means of ensuring a judiciary full of systematically less biased judges.

The question here is not "which is more biased: PAs or judges?" but rather "which produces more racially inequitable outcomes: judges utilizing PAs or judges alone?" Even if improved algorithms' judgments are less biased than those of judges, we must consider how the human judge, who is still the final arbiter of decisions, interacts with the tool. Is a "high risk" score more salient to a judge when given to a black defendant, perhaps leading to continued or even heightened punitive treatment being disproportionately shown towards black offenders? Conversely, is a "low risk" score only salient to judges when given to a white defendant, or can it help a judge overcome implicit biases and show more leniency towards a "low risk" black offender as well? In other words, does utilizing this tool serve to exacerbate, confirm, or ameliorate the perpetuation of racial inequity in judges' decisions?
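A small simulation also helps make concrete the mathematical claim made in the objection above, that a calibrated algorithm yields more false positives in the group with the higher base rate. The sketch below is illustrative only: the base rates, the Beta-distributed risk scores, and the 0.5 "high risk" threshold are assumptions chosen for the example, not COMPAS's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_error_rates(mean_risk, n=200_000, threshold=0.5, concentration=5.0):
    # Each person has a true re-offense probability p drawn from a Beta
    # distribution with the given group mean; the algorithm reports p itself,
    # so within any score band the observed re-offense rate equals the score
    # (i.e., the score is calibrated in both groups by construction).
    a, b = mean_risk * concentration, (1 - mean_risk) * concentration
    p = rng.beta(a, b, n)            # reported risk score = true probability
    y = rng.random(n) < p            # realized re-offense outcomes
    high = p >= threshold            # "high risk" label
    fpr = high[~y].mean()            # non-re-offenders labeled high risk
    fnr = (~high)[y].mean()          # re-offenders labeled low risk
    return fpr, fnr

# Hypothetical base rates chosen purely for illustration.
for label, base_rate in [("lower base-rate group", 0.35),
                         ("higher base-rate group", 0.55)]:
    fpr, fnr = group_error_rates(base_rate)
    print(f"{label}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

Because the score is calibrated in both groups by construction, the resulting gap in false positive and false negative rates comes entirely from the base-rate difference, which is the sense in which a tool can be well calibrated and still fail error rate parity.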
Much more empirical data is required to explore these questions and come to more definitive conclusions. However, this uncertainty is no reason to completely abandon PAs at this stage, for PAs hold great promise for net gains in racial equity, and we can and should keep working to overcome their structural flaws. In conclusion, while COMPAS in its current form operates in a racially biased way, this factor alone is not enough to forgo the use of PAs in the criminal justice system: we must consider the extent of unfair racial disparities perpetuated by tools like COMPAS relative to the extent of unfair racial disparities perpetuated when judges make decisions without the help of such a tool. Despite PAs' flaws, we must not instinctively fall back on the alternative of leaving judges to their own devices, where human cognitive biases reign unchecked. We must embrace the possibility that we can improve human decision-making by using ever-improving tools like properly crafted risk assessment instruments.

Endnotes
1 ProPublica, "Machine Bias."
2 Hellman, "Measuring Algorithmic Fairness," 832-836.
3 Ibid, 840-841.
4 Ibid, 840-841.
5 National Institute of Justice, "Relationship between Race, Ethnicity, and Sentencing Outcomes: A Meta-Analysis of Sentencing Research."
6 Arnold, Dobbie, and Yang, "Racial Bias in Bail Decisions," 1886.
7 Bureau of Justice Statistics, "Federal Sentencing Disparity: 2005-2012," 1.
8 Kleinberg et al., "Human Decisions and Machine Predictions," 241.
9 Corbett-Davies et al., "Algorithmic Decision Making and the Cost of Fairness," 9.
10 Hellman, "Measuring Algorithmic Fairness," 865.

Bibliography
Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. "Machine Bias." ProPublica. May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
Arnold, David, Will Dobbie, and Crystal S. Yang. "Racial Bias in Bail Decisions." The Quarterly Journal of Economics 133, no. 4 (November 2018): 1885–1932. https://doi.org/10.1093/qje/qjy012.
Bureau of Justice Statistics. "Federal Sentencing Disparity: 2005-2012." 248768. October 2015. https://www.bjs.gov/content/pub/pdf/fsd0512_sum.pdf.
Corbett-Davies, Sam, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. "Algorithmic Decision Making and the Cost of Fairness." In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 797-806. 2017.
Hellman, Deborah. "Measuring Algorithmic Fairness." Virginia Public Law and Legal Theory Research Paper, no. 2019-39 (July 2019).
Kleinberg, Jon, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. "Human Decisions and Machine Predictions." The Quarterly Journal of Economics 133, no. 1 (February 2018): 237–293. https://doi.org/10.1093/qje/qjx032.
National Institute of Justice. "Relationship between Race, Ethnicity, and Sentencing Outcomes: A Meta-Analysis of Sentencing Research." Ojmarrh Mitchell and Doris L. MacKenzie. 208129. December 2004. https://www.ojp.gov/pdffiles1/nij/grants/208129.pdf.

Acknowledgments
I would like to thank Professors Frick and Masny for teaching the seminar "The Ethics of Emerging Technologies," for which I wrote this paper. Thank you for bringing my attention to this topic and Hellman's paper and for helping me clarify my argument. I would also like to thank my dad for helping me talk through ideas and providing feedback on my first draft of this paper.

  • Sydney Bowen

    A "Shot" Heard Around the World: The Fed made a deliberate choice to let Lehman fail. It was the right one.

Sydney Bowen

On the morning of September 15, 2008, the Dow Jones Industrial Average plunged more than 500 points; $700 billion in value vanished from retirement plans, government pension funds, and investment portfolios (1). This shocking market rout was provoked by the bankruptcy filing of Lehman Brothers Holdings Inc., which would soon become known as "the largest, most complex, most far-reaching bankruptcy case" filed in United States history (2). Amid job loss, economic turmoil, and choruses of "what ifs," a myriad of dangerous myths and conflicting stories emerged, each desperately seeking to rationalize the devastation of the crisis and explain why the Federal Reserve did not extend a loan to save Lehman. Some accuse the Fed of making a tragic mistake, believing that Lehman's failure was the match that lit the conflagration of the entire Global Financial Crisis. Others disparage the Fed for bowing to the public's political opposition to bailouts. The Fed itself, however, adamantly maintains that it "did not have the legal authority to rescue Lehman," an argument played in unremitting refrain in the years following the crisis. In this essay, I discuss the various dimensions of the heated debate on how and why the infamous investment bank went under. I examine the perennial question of whether regulators really had a choice in allowing Lehman to fail, an inquiry that prompts the multi-dimensional and more subjective discussion of whether regulators made the correct decision. I assert that (I) the Fed made a deliberate, practical choice to let Lehman fail and posthumously justified it with a façade of legal inability, and that (II) in the context of the already irreparably severe crisis, the fate of the future financial landscape, obligations to taxpayers, and the birth of the landmark legislation TARP, the Fed made the 'right' decision.

I. The Fed's Almost Rock-Solid Alibi: Legal Jargon and Section 13(3)

Fed Chairman Ben Bernanke, former Treasury Secretary Hank Paulson, and New York Fed general counsel Thomas Baxter Jr. have each argued in sworn testimony that regulators wanted to save Lehman but lacked the legal authority to do so. While their statements are not lies, they neglect to tell the entire, more incriminating, truth. In this section, I assert that Fed officials deliberately chose not to save Lehman and justified their decision after the fact with the impeccable alibi that they did not have a viable legal option. In a famous testimony, Bernanke announced, "[T]he only way we could have saved Lehman would have been by breaking the law, and I'm not sure I'm willing to accept those consequences for the Federal Reserve and for our system of laws. I just don't think that would be appropriate" (3). At face value, his argument appears sound; however, the "law" alluded to here, Section 13(3) of the Federal Reserve Act, was not a hard and fast body of rules capable of being "broken," but rather a weakly worded, vague provision that encouraged "regulatory gamesmanship and undermined democratic accountability" (4).

i. Section 13(3)

Section 13(3) of the Federal Reserve Act gives the Fed broad power to lend to non-depository institutions "in unusual and exigent circumstances" (5).
It stipulates that a loan must be "secured to the satisfaction of the [lending] Reserve Bank," limiting the amount of credit that the Fed can extend to the value of a firm's collateral in an effort to shield taxpayers from potential losses (6). Yet, since the notion of "satisfactory security" has no precise contractual definition, Fed officials had ample room to exercise discretionary judgment when appraising Lehman's assets. This initial legal freedom was further magnified by the opaqueness of the assets themselves: mortgage-backed securities, credit default swaps, and associated derivatives were newfangled financial instruments manufactured through securitization, complexly tranched and nearly impossible to value. Thus, the three simple words, "secured to satisfaction," provided regulators with an asylum from their own culpability, allowing them to hide a deliberate choice inside a comfortable perimeter of legal ambiguity.

ii. Evaluations of Lehman's Assets and "Secured to Satisfaction"

The "legal authority" to save Lehman hinged upon the Fed's conclusions about Lehman's solvency and its evaluation of the firm's available collateral, a task that boiled down to Lehman's troubled and illiquid real-estate portfolio, composed primarily of mortgage-backed securities. Lehman had valued the portfolio at $50 billion, purporting a $28.4 billion surplus; however, Fed officials and potential private rescuers, skeptical of Lehman's real-estate valuation methods, argued that there was a gaping "hole" in its balance sheet. Bank of America, a private party contemplating a Lehman buyout, maintained that the size of the hole amounted to "$66 billion," while the Fed's task team of Goldman Sachs and Credit Suisse CEOs determined that "tens of billions of dollars were missing" (7). Esteemed economist Lawrence Ball, who meticulously reviewed Lehman's balance sheet, however, concluded to the contrary: there was no "hole," and Lehman was solvent when the Fed allowed it to fail. While I do not claim to know which of the various assessments was correct, the simple fact remains: the myriad of conflicting reports speaks to the ultimate subjectivity of any evaluation. "Legal authority" became hitched to the value of mortgage-backed securities, and in 2008 their value had become dangerously opaque. In discussing the Fed's actions, it is necessary to point out that the Federal Reserve has a rare ability to value assets more liberally than a comparable private party: it can hold distressed assets for longer and ultimately exerts incredible influence over any security's final value because it controls monetary policy. The Dissenting Statement of the FCIC report aptly reveals that Fed leaders could have simply guided their staff to "re-evaluate [Lehman's balance sheet] in a more optimistic way to justify a secured loan;" however, they elected not to do so since such action did not align with their private, practical interests (8). The "law" could have been molded in either direction; the Fed consciously chose the direction of nonintervention just as easily as it could have chosen the opposite.

iii. The Fed's "Practical" and Deliberate Choice

Section 13(3) had been invoked just five months earlier, in March 2008, when the Fed extended a $29 billion loan to facilitate JP Morgan's purchase of a different failing firm, Bear Stearns. In an effort to separate the Fed's handling of Bear Stearns from Lehman, Bernanke admits that the considerations behind each decision were both "legal and practical" (9).
While in Bear Stearns' case practical judgment weighed in favor of intervention, in Lehman's case it did not: "if we lent the money to Lehman, all that would happen would be that the run [on Lehman] would succeed, because it wouldn't be able to meet the demands, the firm would fail, and not only would we be unsuccessful, but we would [have] saddled the taxpayer with tens of billions of dollars of losses" (10). While an exhaustive display of arguments and testimonies that challenge the Fed's claim of legal inability is cogent, perhaps the most chilling evidence lies in an unassuming and incisive question: "Since when did regulators let a lack of legal authority stop them? There was zero legal authority for the FDIC's broad guarantee of bank holding debt. Saving Lehman would have been just one of many actions of questionable legality taken by regulators" (11).

iv. Other Incriminating Facts: The Barclays Guarantee and Curtailed PDCF Lending

An analysis of Lehman's failure would be incomplete without discussing the Fed's resounding lack of action during negotiations of a private rescue with Barclays, a critical moment in the crisis that could have salvaged the failing firm without contentious use of public money. Barclays began conversing with the U.S. Treasury Department a week prior to Lehman's fall as the two sides contemplated and hammered out the terms of an acquisition (12). The planned buyout by the British bank would have gone through had the Fed agreed to guarantee Lehman's trading obligations during the time between the initial deal and the final approval; yet the Fed deliberately refused to intervene, masking its true motives behind a legal inability to offer a "'naked guarantee'–one that would be unsecured and not limited in amount" (13). However, since such a request for an uncapped guarantee never occurred, the Fed's legal alibi is deceitfully misleading. In truth, Lehman asked for secured funding from the Fed's Primary Dealer Credit Facility (PDCF), a liquidity window allowing all Wall Street firms to take out collateralized loans when cut off from market funding ("The Fed—Primary Dealer Credit Facility (PDCF)," n.d.). While Lehman would not have been able to post eligible collateral under the initial requirement of investment-grade securities, it likely would have been able to secure a loan under the expanded version of the program, which accepted a broader range of collateral. The purposeful curtailment of the expanded collateral terms for Lehman is one of the most questionable aspects of the Lehman weekend, and is perhaps the most lucid evidence that the Fed made a deliberate choice to let the firm fail. The FCIC details the murky circumstances and the clear absence of an appropriate explanation for the act: "the government officials made it plain that they would not permit Lehman to borrow against the expanded types of collateral, as other firms could. The sentiment was clear, but the reasons were vague" (14). If there had been a rational explanation, regulators would have articulated it. Instead, they merely repeated that "there existed no obligation or duty to provide such information or to substantiate the basis for the decision not to aid or support Lehman" (15). The Fed's refusal to provide PDCF liquidity administered the final nail in Lehman's coffin; access to such a loan would have made the difference in Lehman being able to open for business that infamous morning.
v. An Intriguing Lack of Evidence

The Fed did not furnish the FCIC with any analysis to show that Lehman lacked sufficient collateral to secure a loan under 13(3), referencing only the estimates of other Wall Street firms and declining to respond to a direct request for "the dollar value of the shortfall of Lehman's collateral relative to its liquidity needs" (16). Diverging from typical protocol, in which the Fed's office "wrote a memo about each of the [potential] loans under Section 13(3)," Lehman's case contains no official memo. When pressed on this topic, Scott Alvarez, the General Counsel of the Board of Governors of the Federal Reserve, rationalized the opportune lack of evidence as an innocuous judgment call: "folks had a pretty good feeling for the value of Lehman during that weekend, and so there was no memo prepared that documented why it is we didn't lend... they understood from all of [the negotiations] that there wasn't enough there for us to lend against and so they weren't willing to go forward" (17). While this absence of evidence does not prove that the Fed had access to a legal option, it highlights a disconcerting and suggestive vacancy in the Fed's claims. Consider an analogous courtroom case in which a defendant exercises the right to remain silent rather than respond to a question that may implicate them; similarly, the Fed's intentional evasion of the request for concrete evidence appears an incriminating insinuation of guilt. The lack of a "paper trail" becomes even more confounding when coupled with the Fed's inconsistent and haphazard statements justifying its decision. Only after the initial praise for the decision soured into a surge of public criticism did any mention of legality enter the public record. Nearly three weeks after Lehman's fall, on October 7th, Bernanke introduced a strategic "alibi": "Neither the Treasury nor the Federal Reserve had the authority to commit public money in that way" (18). Bernanke insists that he will "maintain until [his] deathbed that [they] made every effort to save Lehman, but were just unable to do so because of a lack of legal authority" (19). However, when considering the subjectivity of "reasonable assurance" of repayment, the malleability of "legal authority," and the convenient lack of evidence to undermine his statement, Bernanke's "dying" claim becomes comically hollow. If the Fed had truly made "every effort" to rescue Lehman, it would have relied on more than a "pretty good feeling"; had officials truly been sincere, the Federal Reserve, a team of seasoned economists, would have used hard numerical facts as guidance for a path forward.

vi. The Broader Implications of "Secured to Satisfaction": a Logical Fallacy

While the Fed's lack of transparency is unsettling, perhaps the most unnerving aspect of the entire Lehman episode is the precarious regulatory framework that the American financial system trusted during a crisis. The concept of "secured to satisfaction" is not the bullet-proof legal threshold painted by the media; rather, it was a malleable moving target molded by the generosity of the Fed's estimates and the fluctuating state of the economy, not by precise mathematical facts. A 2018 article by Columbia Law Professor Kathryn Judge exposes the logical fallacy of Section 13(3)'s "secured to satisfaction," citing how "subsequent developments can have a first order impact on both the value of the assets accepted as collateral and the apparent health of the firms needing support" (20).
The "legal authority" of regulators to invoke Section 13(3) is a circular and empty concept, hitched to nebulous evaluations of complex and opaque securities, assets that were not only inherently hard to value but whose valuations could later be manipulated. By adjusting the composition of its balance sheet (open market operations) and altering interest rates, the Fed guides the behavior of financial markets, thus subtly inflating (or deflating) the value of a firm's collateral (21). Indeed, in the years following the government's support of Bear Stearns and AIG, the Fed's aggressive and novel monetary policy (close-to-zero interest rates and a large-scale program of quantitative easing) may have been "critical to making the collateral posted by [Bear Stearns and AIG] seem adequate to justify the central bank's earlier actions" (22). Using collateral quality and solvency as prerequisites for lawful action is inherently problematic, since a firm's health and the quality of its collateral are not factors given exogenously; they are endogenous variables that regulators themselves play a critical role in determining. Thus, acceptance of the narrative that Lehman failed because the Fed lacked any legal authority to save it would be a naive oversight. Rather, Lehman failed because the Fed lacked the practical and political motivations to exploit the law.

II. The Right Choice

As Lehman's downfall is both a politically contentious and emotionally charged topic, it is necessary to approach the morality of the Fed's decision with sympathy and caution. In the following sections, I intend to illustrate why regulators made the right decision in allowing Lehman to fail by using non-partisan facts organized around four key arguments.

(1) Lehman was not the watershed event of the Crisis. The market panic following September 2008 was a reaction to a collection of unstoppable, unrelated, and market-shaking events.

(2) Lehman's failure expunged the hazardous incentives carved into the financial landscape before it. Policymakers shrewdly chose long-term economic order over the short-term benefit of keeping a single firm afloat.

(3) Failure was the "right" and only choice from a taxpayer's perspective.

(4) Lehman's demise was a necessary catastrophe, creating circumstances so parlous that Congress passed TARP, landmark legislation that gave regulators the authority that ultimately revived the financial system.

(1) Lehman Was Not the Watershed Event of the Crisis

For many people, the heated debate over whether regulators did the right thing in allowing Lehman to fail is synonymous with the larger question: "would rescuing Lehman have saved us from the Great Recession?" In the following section, I assert that Lehman was not the defining moment of the Financial Crisis (as is often construed in the media); rather, the global financial turmoil was irreversibly underway by September 2008, and the ensuing disaster could not have been averted simply by Lehman's rescue. "The problem was larger than a single failed bank – large, unconnected financial institutions were undercapitalized because of [similar, failed housing bets]" (23). By Monday, September 15, Bank of America had rescued the deteriorating Merrill Lynch and the insurance giant AIG was on the brink of failure, a testament to the critical detail that many other large financial institutions were also in peril due to losses on housing-related assets and a subsequent liquidity crisis.
Indeed, in the weeks preceding Lehman's failure, the interbank lending market had virtually frozen, plunged into distress by a contagious spiral of self-fulfilling expectations. Unable to ascertain the location and size of the subprime risk held by counterparties in the market, investors became panicked by the obscured, seemingly ubiquitous risk of housing exposure, precipitously cutting off or restricting funding to other market participants. This perceived threat of a liquidity crisis triggered the downward spiral of the interbank lending market in the weeks preceding Lehman's fall, a market which pumped vital cash into nearly every firm on Wall Street. The LIBOR-OIS spread, a proxy for counterparty risk and a robust indicator of the state of the interbank market, illustrates these "illiquidity waves" that severely impaired markets in 2008 (24). In the weeks prior to the failure of Lehman Brothers, the spread spiked dramatically, soaring above 300 basis points and portraying the cascade of panic and the contraction of lending standards in the interbank market. The idea that Lehman was the key moment in the crisis might be accurate if nothing of significance had happened before its failure; however, as I outline below, this was clearly not the case. The quick succession of events occurring in September 2008, events which would have occurred regardless of Lehman's failure, triggered the global financial panic. A New Yorker article publishing a detailed timeline of the weekend exposes how AIG's collapse and near failure were completely uncorrelated with Lehman (25). On Saturday, September 13, AIG's "looming multi-billion-dollar shortfall" from bad gambles on credit default swaps became apparent. Rescuing AIG became a top priority throughout the weekend, and on Tuesday, the day after Lehman filed for bankruptcy protection, the Fed granted an $85 billion emergency loan to salvage AIG's investments (26). Given the curious timing, AIG's troubles are often chalked up to be a market reaction to Lehman's failure; however, the facts expose the failures of AIG and Lehman as merely a close succession of unfortunate yet unrelated events. In a similar light, the failure and subsequent buyouts of Washington Mutual (WaMu) and Wachovia, events that further rocked financial markets and battered confidence, would have occurred regardless of a Lehman bailout. Both commercial banks were heavily involved in subprime mortgages and were in deep trouble before Lehman. University of Oregon economist Tim Duy asserts that, even with a Lehman rescue, "the big mortgage lenders and regional banks [i.e. WaMu and Wachovia] that were more directly affected by the mortgage meltdown likely wouldn't have survived" (27). The financial system was precariously fragile by the fall of 2008, and saving Lehman would not have defused the larger crisis or the ensuing market panic that erupted after September 2008. Critics of the Fed's decision often cite how the collapse of Lehman Brothers begat the $62 billion Reserve Primary Fund's "breaking of the buck" on Thursday, September 18 and precipitated a $550 billion run on money-market funds. Lehman's dire effect on money and commercial paper markets is irrefutable; however, arguments that Lehman triggered this broader global financial panic neglect other relevant facts. The Lehman failure neither froze, nor would a Lehman rescue have unfrozen, credit markets, the key culprit responsible for the escalation and depth of the Crisis (28).
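For readers unfamiliar with the indicator: the LIBOR-OIS spread is simply the gap between a term LIBOR rate and the corresponding overnight indexed swap rate, usually quoted in basis points. A minimal sketch of the arithmetic, using invented rates rather than actual 2008 quotes:

```python
# Illustrative only: the rates below are invented, not actual 2008 market data.
libor_3m = [0.0281, 0.0288, 0.0320, 0.0482]   # hypothetical 3-month LIBOR fixings
ois_3m   = [0.0208, 0.0210, 0.0195, 0.0160]   # hypothetical 3-month OIS rates

for libor, ois in zip(libor_3m, ois_3m):
    spread_bp = (libor - ois) * 10_000        # spread in basis points
    flag = "  <- stress" if spread_bp > 300 else ""
    print(f"LIBOR {libor:.2%}  OIS {ois:.2%}  spread {spread_bp:5.0f} bp{flag}")
```

A sustained reading in the hundreds of basis points, as in the last hypothetical row, is the kind of spike described above in the weeks around Lehman's failure.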
Credit markets did not freeze in 2008 because the Fed chose not to bail out Lehman; they froze because of the mounting realization that mortgage losses were concentrated in the financial system, but nobody knew precisely where they lay. It was this creeping, inevitable realization, amplified by Lehman and the series of September events, that caused financial hysteria (29). As Geithner explains, "Lehman's failure was a product of the forces that created the crisis, not the fundamental cause of those forces" (30). The core problems that catalyzed the financial market breakdown were an amalgamation of highly leveraged institutions, a lack of transparency, and the rapidly deteriorating value of mortgage-related assets; bailing out Lehman would not have miraculously fixed these problems. While such an analysis cannot unequivocally prove that regulators made the right decision in choosing to let Lehman fail, it offers a step in the right direction: the conventional wisdom that Lehman single-handedly triggered the collapse of confidence that froze credit markets and caused borrowing rates for banks to skyrocket is unfounded. While I have argued above that Lehman's bankruptcy was not the sole trigger of the crisis, it was also not even the largest trigger. Research by economist John Taylor asserts that Lehman's bankruptcy was not the decisive event peddled by the media: using the LIBOR spread (a standard measure of market stress), Taylor found that the true ratcheting up of the crisis began on September 19, when the government revealed that it planned to ask Congress for $700 billion to defuse the crisis (31). Arguments advanced by mainstream media that saving Lehman would have averted the recession are naively optimistic and promote a dangerously inaccurate narrative of the events of 2007–2009. The failure of Lehman did indeed send new waves of panic through the economy; however, Lehman was not the only disturbance to rock financial markets in September of 2008 (32). This latter fact is of critical importance.

(2) Lehman's Collapse Caused Inevitable and Necessary Market Change

"The inconsistency was the biggest problem. The Lehman decision abruptly and surprisingly tore the perceived rule book into pieces and tossed it out the window." –Former Vice Chairman of the Federal Reserve Alan Blinder (33).

Arguments that cite the ensuing market panic and erosion of confidence that erupted after Lehman's failure are near-sighted and fail to appreciate the larger picture motivating policymakers' decision. Regulators' decision not to rescue the then fourth-largest investment bank, an institution assumed "too big to fail," dispensed a necessary wake-up call to deluded and unruly Wall Street firms, which had been lulled into a costly false sense of security. The question of whether regulators did the right thing in allowing Lehman to fail cannot be studied in a vacuum; it must be considered alongside the more consequential question of whether regulators made the right decision in saving Bear Stearns. In March 2008, the Fed's extension of a $29 billion loan to Bear Stearns rewrote the tacit rules that had governed the political and fiscal landscape, substantiating the notion that institutions could be "too big or too interconnected to fail." The comforting assumption that regulators would intervene to save every systemically important institution from failure was a turning point in the crisis, "setting the stage for [the financial carnage] that followed" (34).
After the Bear Stearns intervention, regulators faced a formidable and insuperable enemy: the inexorable march of time. It would have been an unsustainable situation for the government to continue bailing out every ailing financial firm. "These officials would have eventually had to say 'no' to someone, sometime. The Corps of Financial Engineers drew the line at Lehman. They might have been able to let the process run a few weeks more and let the bill get bigger, but ultimately, they would have had to stop. And when they did expectations would be dashed and markets would adjust. If Lehman had been saved, someone else would have been allowed to fail. The only consequence would be the date when we commemorate the anniversary of the crisis, not that the crisis would have been forever averted" (35). The Lehman decision corrected the costly market expectations created by Bear Stearns' rescue and restored efficiency and discipline to markets. Throughout the crisis, policymakers, unable to completely avoid damage, were forced to decide which parties would bear losses. Lehman's demise was a reincarnation and emblem of their past decisions: their precedent of taxpayer burden had further encouraged Wall Street's excessive leverage and reckless behavior (36). Saving Lehman would have simply hammered these skewed incentives further into markets, putting the long-term stability and structure of capitalist markets at risk. Taxpayers would have been forced to foot a bill regardless of the Fed's final decision: if not directly through a bailout, then indirectly through layoffs and economic turmoil (37). Instead of saddling taxpayers with the lingering threat of a large bill in the future, the Fed made the prudent and far-sighted decision to hand them a smaller bill today. The Fed heeded the wisdom of the age-old adage, "better the devil you know than the devil you don't." Put simply, the economic "calculus" of policymakers was correct. While rescuing Lehman may have seemed tantalizing at the time, the long-term costs would have been far more consequential than the short-term benefits (38). Political connotations often accompany this argument, evocative of what some have christened the Fed's "painful yet necessary lesson on moral hazard;" however, partisan beliefs are extraneous to the simple economic facts of the matter. From a fiscal perspective, policymakers made the right choice to let Lehman fail by shrewdly choosing long-term economic order over short-term benefits.

(3) The Right Decision from a Taxpayer's Perspective

Given financial markets' complete loss of confidence in Lehman and the unnervingly fragile state of the economy, an attempt at a Lehman rescue (within or above the law) would have been not only a fruitless but also a seriously unjust use of taxpayer dollars. The health of an investment bank hinges upon the willingness of customers and counterparties to deal with it, and according to former Secretary Geithner, "that confidence was just gone" (39). By the weekend, the market had already lost complete confidence in Lehman: "no one believed that the assets were worth their nominal value of $640 billion; a run on its assets was already underway, its liquidity was vanishing, and its stock price had fallen by 42% on just Friday September 12th; it couldn't survive the weekend" (40). For all practical purposes, the markets had sealed Lehman's fate, and a last-minute government liquidity line could have done nothing to change it.
In testimony, Bernanke aptly characterizes a loan to supplant the firm's disappearing liquidity as a prodigal expenditure, "merely wasting taxpayer money for an outcome that was unlikely to change" (41). After the fallout of the Barclays deal, many experts have argued that the Fed should have provided liquidity support during a search for another buyer, since temporary liquidity assistance from the government might have extinguished the escalating crisis. However, such an open-ended government commitment, one that allowed Lehman to shop for an "indefinite time period," would have been an absurd waste of public money (42). If the Fed had indeed provided liquidity aid up to some generous valuation of Lehman's collateral, "the creditors to Lehman could have cashed out 100 cents on the dollar, leaving taxpayers holding the bag for losses" (43). The loan would not have prevented failure, but only chosen which creditors would bear Lehman's losses at the expense of others. On September 15, "Lehman [was] really nothing more than the sum of its toxic assets and shattered reputation as a venerable brokerage" (44). It would have been an egregious abuse of the democratic tax system for the government to bail out Lehman, leaving the public at the whims of the fragile financial markets and saddling them with an uncapped bill for Wall Street's imprudence. While virulent rumors of Lehman's failure as political face-saving by regulators may prevail in mainstream media, I maintain that the Fed's decision was the right one for the American public (45).

(4) TARP: Lehman Begat the Legislation that Revived the Financial System

In considering the relative importance of Lehman as a cause of the crisis, scholars must also consider the more nuanced and hard-hitting counterpart: "How important was Lehman as a cause of the end of the Crisis?" While in the context of the suffering caused by the Great Recession and the polarizing rhetoric of "bailing out banks" this question is politically unpopular, I broach it nonetheless, since it is an important facet of the debate on whether regulators made the "right decision." Lehman's failure was vitally important to the end of the Crisis: it allowed the Troubled Asset Relief Program (TARP) to pass Congress, a critical piece of legislation that equipped regulators with the tools ultimately necessary to repair the financial system (46). Every previous effort of the Fed (creating the PDCF, rescuing Bear Stearns, the conservatorship of Fannie and Freddie) was not enough to salvage the deteriorating financial system; by September 2008, "Merrill Lynch, Lehman, and AIG were all at the edge of failure, and Washington Mutual, Wachovia, Goldman Sachs, and Morgan Stanley were all approaching the abyss" (47). The government needed the authority to inject capital into the financial system, and as described in Naomi Klein's The Shock Doctrine, Lehman's unexpected fall acted as the final catastrophic spark necessary to "prompt the hasty emergency action involving the relinquishment of rights and funds that would otherwise be difficult to pry loose from the citizenry" (48). With authority to inject up to $700 billion of capital into suffering non-bank institutions, TARP preserved the crumbling financial system by inspiring institutions to lend again. The government offered $250 billion in capital to the nine most systemically important institutions and used $90 billion in TARP financing to save the teetering financial giants Bank of America and Citigroup (49).
Exactly how much credit TARP deserves for averting financial catastrophe is unclear, yet the fact remains that, coupled with Geithner's stress tests, TARP helped stop the country's spiral into what could have been a crisis as dire as the Great Depression.

III. Conclusion

In this essay, I have shown that the Fed exploited the vagueness of Section 13(3) to advance its political, economic, and moral agenda to let Lehman fail, and asserted that policymakers made the right choice in allowing Lehman to fail (weighing economic facts, the implications for the future economic landscape, taxpayers' rights, and the passage of landmark legislation). It may have been easier for regulators to hide behind legal jargon and technicalities than to defend the economic rationale and practicality of their onerous decision to an audience of distressed Americans; however, this ease is not without the costs of continued confusion, misleading conventional wisdom, and a bitter citizenry.

"Lehman's bankruptcy will forever be synonymous with the financial crisis and (resulting) wealth destruction." -Paul Hickey, founder of Bespoke Investment Group (50).

Lehman's failure left an indelible mark in history and a tireless refrain of diverging and potent emotions towards regulators: contempt for the Fed that "triggered the Crisis," disdain for the government that bailed out Wall Street with TARP, and hatred of impressionable leaders who "bowed" to political pressure. It is indeed easier to accept a visceral and tangible moment like Lehman's failure as a cause of suffering than the nihilistic and elusive fact that the buildup of leverage and the burst of the housing bubble caused the crisis. However, it is not enough for only academics and policymakers to understand that "Lehman's failure was a product of the forces that created the crisis, not a fundamental cause of those forces" (51). Conventional wisdom must be rewritten for the sake of faith in the government and the prevention of future crises. Our acceptance of why Lehman was allowed to die must move beyond the apportioning of responsibility or the distribution of reparations; we must redirect the futile obsession over the legality and morality of the Fed's decision towards the imbalances in the financial system that caused the Crisis to begin with.

Endnotes
1 Public Affairs, The Financial Crisis Inquiry Report, 340.
2 Ibid.
3 Clark, "Lehman Brothers Rescue Would Have Been Unlawful, Insists Bernanke."
4 Judge, "Lehman Brothers: How Good Policy Can Make Bad Law."
5 Fettig, The History of a Powerful Paragraph.
6 Ball, The Fed and Lehman Brothers, 5.
7 Stewart, Eight Days.
8 Public Affairs, The Financial Crisis Inquiry Report, 435.
9 Public Affairs, The Financial Crisis Inquiry Report, 340.
10 Ibid.
11 Calabria, "Letting Lehman Fail was a Choice, and It Was the Right One."
12 Chu, "Barclays Ends Talks to Buy Lehman Brothers."
13 Ball, The Fed and Lehman Brothers.
14 Public Affairs, The Financial Crisis Inquiry Report, 337.
15 Ball, The Fed and Lehman Brothers, 141.
16 Ibid, 11.
17 Ibid, 133.
18 J.B. Stewart and Eavis, "Revisiting the Lehman Brothers Bailout that Never Was."
19 Ibid.
20 Judge, "Lehman Brothers: How Good Policy Can Make Bad Law."
21 Tarhan, "Does the federal reserve affect asset prices?"
22 Judge, "Lehman Brothers: How Good Policy Can Make Bad Law."
23 Public Affairs, The Financial Crisis Inquiry Report, 433.
24 Sengupta & Tam.
25 J.B. Stewart, "Eight Days."
26 Public Affairs, The Financial Crisis Inquiry Report, 435.
27 O'Brien, "Would saving Lehman have saved us from the Great Recession?"
28 Ibid.
29 Public Affairs, The Financial Crisis Inquiry Report, 436.
30 Geithner & Metrick, Ten Years after the Financial Crisis: A Conversation with Timothy Geithner.
31 Skeel, "History credits Lehman Brothers' collapse for the 2008 financial crisis. Here's why that narrative is wrong."
32 Public Affairs, The Financial Crisis Inquiry Report, 436.
33 J.B. Stewart and Eavis, "Revisiting the Lehman Brothers Bailout that Never Was."
34 Skeel, "History credits Lehman Brothers' collapse for the 2008 financial crisis. Here's why that narrative is wrong."
35 Reinhart, "A Year of Living Dangerously: The Management of the Financial Crisis in 2008."
36 Ibid.
37 Antoncic, "Opinion | Lehman Failed for Good Reasons."
38 Reinhart, "A Year of Living Dangerously: The Management of the Financial Crisis in 2008."
39 Geithner & Metrick, Ten Years after the Financial Crisis: A Conversation with Timothy Geithner.
40 J.B. Stewart, "Eight Days."
41 Public Affairs, The Financial Crisis Inquiry Report, 435.
42 Ibid.
43 Ibid.
44 Grunwald, "The Truth About the Wall Street Bailouts."
45 Erman, "Five years after Lehman, Americans still angry at Wall Street: Reuters/Ipsos poll."
46 Geithner & Metrick, Ten Years after the Financial Crisis: A Conversation with Timothy Geithner.
47 Ibid.
48 Erman, "Five years after Lehman, Americans still angry at Wall Street: Reuters/Ipsos poll."
49 J.B. Stewart, "Eight Days."
50 Sraders, "The Lehman Brothers Collapse and How It's Changed the Economy Today."
51 Geithner & Metrick, Ten Years after the Financial Crisis: A Conversation with Timothy Geithner.

Bibliography
Antoncic, M. (2018, September). Opinion | Lehman Failed for Good Reasons. The New York Times. Retrieved from https://www.nytimes.com/2018/09/17/opinion/lehman-brothers-financial-crisis.html
Ball, L. (2016). The Fed and Lehman Brothers. 218.
Calabria, M. (2014). Letting Lehman Fail Was a Choice, and It Was the Right One | Cato Institute. Retrieved December 7, 2019, from https://www.cato.org/publications/commentary/letting-lehman-fail-was-choice-it-was-right-one
Chu, Kathy. (2008). Barclays Ends Talks to Buy Lehman Brothers. ABC News. Retrieved January 3, 2021, from https://abcnews.go.com/Business/story?id=5800790&page=1
Clark, Andrew. (2010). Lehman Brothers Rescue Would Have Been Unlawful, Insists Bernanke. The Guardian. Retrieved January 1, 2021, from http://www.theguardian.com/business/2010/sep/02/lehman-bailout-unlawful-says-bernanke
Erman, M. (2013, September 15). Five years after Lehman, Americans still angry at Wall Street: Reuters/Ipsos poll. Reuters. Retrieved from https://www.reuters.com/article/us-wallstreet-crisis-idUSBRE98E06Q20130915
Fettig, D. (2008, June). The History of a Powerful Paragraph. Federal Reserve Bank of Minneapolis. https://www.minneapolisfed.org/article/2008/the-history-of-a-powerful-paragraph
Geithner, T., & Metrick, A. (2018). Ten Years after the Financial Crisis: A Conversation with Timothy Geithner. Retrieved from https://www.ssrn.com/abstract=3246017
Grunwald, M. (2014, September). The Truth About the Wall Street Bailouts. Time. Retrieved December 7, 2019, from https://time.com/3450110/aig-lehman/
Judge, Kathryn. (2018, September 11). Lehman Brothers: How Good Policy Can Make Bad Law. Retrieved December 3, 2019, from CLS Blue Sky Blog website: http://clsbluesky.law.columbia.edu/2018/09/11/lehman-brothers-how-good-policy-can-make-bad-law/
O'Brien, M. (2018, September).
Would saving Lehman have saved us from the Great Recession? The Washington Post. Retrieved December 4, 2019, from https://www.washingtonpost.com/business/2018/09/20/would-saving-lehman-have-saved-us-great-recession/
Reinhart, V. (2011). A Year of Living Dangerously: The Management of the Financial Crisis in 2008. Journal of Economic Perspectives, 25(1), 71–90. https://doi.org/10.1257/jep.25.1.71
Skeel, D. (2018, September 20). History credits Lehman Brothers' collapse for the 2008 financial crisis. Here's why that narrative is wrong. Retrieved November 17, 2019, from Brookings website: https://www.brookings.edu/research/history-credits-lehman-brothers-collapse-for-the-2008-financial-crisis-heres-why-that-narrative-is-wrong/
Spector, S. C. and M. (2010, March 13). Repos Played a Key Role in Lehman's Demise. Wall Street Journal. Retrieved from https://www.wsj.com/articles/SB10001424052748703447104575118150651790066
Sraders, A. (2018). The Lehman Brothers Collapse and How It's Changed the Economy Today. Retrieved December 9, 2019, from TheStreet website: https://www.thestreet.com/markets/lehman-brothers-collapse-14703153
Stewart, J. B. (2009, September). Eight Days. The New Yorker. Retrieved December 7, 2019, from https://www.newyorker.com/magazine/2009/09/21/eight-days
Stewart, J. B., & Eavis, P. (2014, September 29). Revisiting the Lehman Brothers Bailout That Never Was. The New York Times. Retrieved from https://www.nytimes.com/2014/09/30/business/revisiting-the-lehman-brothers-bailout-that-never-was.html
Tarhan, V. (1995). Does the federal reserve affect asset prices? Journal of Economic Dynamics and Control, 19(5), 1199–1222. https://doi.org/10.1016/0165-1889(94)00824-2
The Fed—Primary Dealer Credit Facility (PDCF). (n.d.). Retrieved December 5, 2019, from https://www.federalreserve.gov/regreform/reform-pdcf.htm
The Financial Crisis Inquiry Report. (2011). PublicAffairs.

  • Ticketmaster | BrownJPPE

Rewriting the Antitrust Setlist: Examining the Live Nation-Ticketmaster Lawsuit and its Implications for Modern Antitrust Law Katya Tolunsky, Author; Malcolm Furman and Arjun Ray, Editors I. Introduction On November 15, 2022, the music industry witnessed an unprecedented event that would become a turning point in discussions about ticketing practices and market dominance. Millions of devoted Taylor Swift fans were devastated when they failed to secure tickets for the highly anticipated Eras Tour. The ticket release sparked chaos, with fans enduring hours, even days, on Ticketmaster’s website, battling extended delays, technical glitches, and unpredictable price fluctuations. Despite their unwavering persistence, many “Swifties” were left empty-handed. This high-profile debacle ignited a firestorm of criticism from politicians and consumers alike, who questioned Ticketmaster’s apparent lack of preparedness for the overwhelming demand. While not an isolated incident of consumer dissatisfaction, the scale of this event and the passionate outcry from Swift’s fan base catapulted long-standing issues with ticket availability, pricing, and fees into the national spotlight. The “Swift ticket fiasco” became a catalyst for broader scrutiny of Ticketmaster’s business practices. Lawmakers and consumer advocacy groups called for investigations into the company’s business model, while accusations circulated about Ticketmaster leveraging its market power to stifle competition and maintain high fees. This perfect storm of events set the stage for a renewed examination of antitrust concerns in the live entertainment industry, bringing the anticompetitive practices of Live Nation-Ticketmaster into the public political and legal spotlight. On May 23, 2024, the U.S. Department of Justice (DOJ) filed a civil antitrust lawsuit against Live Nation Entertainment (the merged company) for allegedly violating the terms of a 2010 settlement, which required Ticketmaster to license its software to competitors and prohibited Live Nation from retaliating against venues that use competing ticketing services, and for engaging in anticompetitive practices. The DOJ’s complaint argues that Live Nation has used its control over concert venues and artists to pressure venues into using Ticketmaster and to punish those that don’t, effectively excluding rival ticketing services from the market. The DOJ is suing Live Nation-Ticketmaster for violating Section 2 of the Sherman Antitrust Act and monopolizing markets across the live concert industry. This suit raises important questions about the application of the Sherman Act and the evolving approach to antitrust enforcement in the United States. At the heart of this case lies a fundamental clash between two competing philosophies of antitrust enforcement. For decades, the Chicago School approach has dominated American antitrust law, focusing narrowly on consumer welfare through the lens of prices and economic efficiency. However, a new perspective has emerged to challenge this framework. The “New Brandeis” movement, named after Supreme Court Justice Louis Brandeis and championed by current FTC Chair Lina Khan, advocates for a broader understanding of competition law that considers market structure, concentration of economic power, and impacts on democracy—not just consumer prices. As this antitrust movement gains prominence and momentum, the Live Nation-Ticketmaster case represents a critical test for the application of Section 2 of the Sherman Act in the digital age. 
The outcome of this case will set important precedents for how antitrust law is applied to companies that dominate multiple interconnected markets. This paper seeks to analyze the evolution of antitrust law in the context of this Live Nation-Ticketmaster lawsuit. First, this paper details the 2010 Live Nation-Ticketmaster merger, the terms on which it was approved, and the extensive criticism it attracted. Second, this paper delves into the relevant history of the Sherman Antitrust Act and the evolution and enforcement of antitrust and monopoly law over the last one hundred years. Additionally, to illustrate the scope of anticompetitive behavior and the ways in which past antitrust cases have been prosecuted, the paper examines several notable cases concerning Section 2 of the Sherman Act. Third, this paper explores the recent shift in the approach to antitrust law, characterized by the New Brandeis movement, and the broader debate surrounding the purpose and scope of antitrust enforcement. Lastly, this paper seeks to situate the Live Nation-Ticketmaster lawsuit in the context of this debate and analyze the implications and potential outcomes of this suit. Ultimately, this paper seeks to show that the DOJ’s original approval of the Live Nation-Ticketmaster merger in 2010 with behavioral remedies was inadequate in preventing anticompetitive practices and protecting consumer interests, and that structural remedies (such as breaking up the company) are necessary to restore effective competition in the live entertainment industry. The Live Nation-Ticketmaster merger in 2010 and its subsequent negative impact on consumers and the live entertainment industry illustrate how poorly traditional consumer welfare-focused antitrust enforcement addresses the complexities of modern markets, particularly in industries like live entertainment where vertical integration can lead to subtle forms of anticompetitive behavior. By examining how Live Nation's market power is reinforced through its data advantages and “flywheel” business model, this paper demonstrates why traditional antitrust frameworks struggle to address such modern competitive dynamics. Ultimately, this paper argues that the Live Nation-Ticketmaster case demonstrates the need for a broader interpretation of antitrust law and a more aggressive approach to its enforcement, in line with the New Brandeis approach. II. The Live Nation-Ticketmaster Merger: Antitrust Considerations and Regulatory Response In 2010, Live Nation, the world’s largest concert promoter, merged with Ticketmaster, the world’s dominant ticketing platform. At the time of the merger, Ticketmaster held an effective monopoly in the ticket sales market, with an estimated 80% market share for concerts in large venues. In 2008, Live Nation had launched its own ticketing platform, positioning itself as a rival to Ticketmaster by offering competitive pricing, leveraging its existing relationships with venues and artists, and promising to reduce service fees. This direct competition in ticketing, combined with Live Nation's dominant position in concert promotion, posed a significant threat to Ticketmaster's monopoly, a threat the merger would eliminate. Critics argued that the merger would lead to higher ticket prices, reduced competition, and a worse experience for consumers. 
In his 2009 testimony before the Senate Committee on the Judiciary, Subcommittee on Antitrust, Competition Policy and Consumer Rights, Senior Fellow for the American Progress Action Fund David Balto said, “Eliminating a nascent competitor by acquisition raises the most serious antitrust concerns…By acquiring Ticketmaster, Live Nation will cut off the air supply for any future rival to challenge its monopoly in the ticket distribution market.” Despite this widespread criticism of the proposed merger and its potential consequences, the DOJ approved the merger. However, the DOJ still recognized the potential threats and consumer criticism of the merger. In response to these concerns, the DOJ referred to the limits of antitrust enforcement, noting that its role is to prevent anticompetitive harms from mergers, not to remake industries or address all consumer complaints. In a speech delivered on March 18th, 2010, titled “The Ticketmaster/Live Nation Merger Review and Consent Decree in Perspective,” Assistant Attorney General for the Antitrust Division Christine A. Varney said: “Our concern is with competitive market structure, so our job is to prevent the anticompetitive harms that a merger presents. That is a limited role: whatever we might want a particular market to look like, a merger does not provide us an open invitation to remake an industry or a firm’s business model to make it more consumer friendly…In the course of investigating this merger, we heard many complaints about trends in the live music industry, and many complaints from consumers about Ticketmaster. I understand that people view Ticketmaster’s charges, and perhaps all ticketing fees in general, as unfair, too high, inescapable, and confusing. We heard that it is impossible to understand the litany of fees and why those fees have proliferated. I also understand that consolidation has been going on in the industry for some time and the resultant economic pressures facing local management companies and promoters. Those are meaningful concerns, but many of them are not antitrust concerns. If they come from a lack of effective competition, then we hope to treat them as symptoms as we seek to cure the underlying disease. Where such issues concern consumer fairness, however, they are better addressed by other federal agencies.” Varney’s statement delineates a narrow view of the DOJ's role in merger review, focusing primarily on preventing specific antitrust violations rather than addressing broader consumer concerns or industry trends. This approach suggests that the DOJ saw its mandate as limited to addressing anticompetitive harms directly related to the merger, rather than using the merger review process to address wider industry problems or consumer dissatisfaction that fall outside the scope of antitrust law. The merger itself raised both horizontal (direct competitors merging) and vertical (different levels of the supply chain merging) integration concerns. The DOJ approved the merger with certain conditions: Ticketmaster had to sell Paciolan (its self-ticketing company), Ticketmaster had to license its software to Anschutz Entertainment Group (AEG), and, most importantly, Live Nation was prohibited from retaliating against venues that use competing ticketing services. In the merger settlement, the DOJ stated that it would monitor compliance with the agreement for ten years and establish an Order Compliance Committee to receive reports of concerning behavior from industry players. 
The DOJ also emphasized the importance of industry participation in monitoring and reporting potential violations of the agreement or antitrust laws. These conditions were intended to address the most immediate competitive concerns raised by the merger. Thus, the DOJ primarily relied on behavioral remedies rather than structural changes, an approach that would later be criticized as insufficient to prevent anticompetitive practices. Structural changes, in contrast, could have involved more drastic measures such as requiring the divestiture of certain business units, breaking up the merged entity into separate companies, or imposing limitations on the company's ability to operate in multiple segments of the live entertainment industry. These types of structural remedies aim to fundamentally alter the company's market position and capabilities, rather than merely regulating its behavior. In addition, the reliance on industry self-reporting and time-limited monitoring also raised questions about the long-term effectiveness of these measures. In retrospect, the DOJ’s approach to the Live Nation-Ticketmaster merger exemplifies the limitations of traditional antitrust enforcement in addressing complex, vertically integrated industries. By focusing on narrow, immediate competitive effects and relying heavily on behavioral remedies, the DOJ underestimated the long-term impact of the merger on market dynamics in the live entertainment industry. This case would later become a touchstone in debates about the adequacy of existing antitrust frameworks and the need for more comprehensive approaches to merger review and enforcement. III. The Sherman Act and the Evolution of Antitrust Jurisprudence The Sherman Antitrust Act, passed in 1890, was a landmark piece of legislation that emerged from the economic and political turmoil of the late 19th century’s Gilded Age. This era saw rapid industrialization and the rise of powerful trusts and monopolies that dominated key industries such as oil, steel, and railroads. These business entities, through their immense economic power, were able to stifle competition, manipulate prices, and exert immense influence on the political process. Public outcry against these practices grew, with farmers, small business owners, and laborers demanding government action to curb corporate excess. In response to these concerns, the Sherman Act became the first federal legislation to outlaw monopolistic business practices, particularly by prohibiting trusts. A trust in this context was an arrangement by which stockholders in several companies would transfer their shares to a single set of trustees, receiving in exchange a certificate entitling them to a specified share of the consolidated earnings of the jointly managed companies. This structure allowed for the concentration of economic power that the Act sought to prevent. The Sherman Act outlawed all contracts and conspiracies that unreasonably restrained interstate and foreign trade. Its authors believed that an efficient free market system was only possible with robust competition. While the Act targeted trusts, it also addressed monopolies – markets where a single company controls an entire industry. While the Sherman Act broadly addresses anticompetitive practices, Section 2 is particularly relevant to analyze the Live Nation-Ticketmaster case as it directly pertains to monopolization. Section 2 of the Sherman Act specifically prohibits monopolization, attempted monopolization, and conspiracies to monopolize. 
Essentially, it outlaws the acquisition or maintenance of monopoly power through unfair practices. However, it’s important to note that the purpose of Section 2 is not to eliminate monopolies entirely, but rather to promote a market-based economy and preserve competition. This nuanced approach taken by Section 2 recognizes that some monopolies may arise from superior business acumen or innovation, and only seeks to prevent those achieved or maintained through anticompetitive means. The Sherman Act laid the foundation for antitrust law in the United States, reflecting a societal commitment to maintaining competitive markets and limiting the concentration of economic power. Its passage marked a significant shift in the government’s role in regulating business practices and shaping the economic landscape. While the Sherman Act laid the groundwork for antitrust law in the United States, it was supplemented by two important pieces of legislation in 1914: the Clayton Antitrust Act and the Federal Trade Commission Act. The Clayton Act expanded on the Sherman Act by prohibiting specific anticompetitive practices such as price discrimination, exclusive dealing contracts, tying arrangements, and mergers that substantially lessen competition. The Federal Trade Commission Act created the Federal Trade Commission (FTC) as an independent regulatory agency to prevent unfair methods of competition and deceptive acts or practices in commerce. Together, these Acts addressed some of the Sherman Act’s limitations and provided more specific guidelines for antitrust enforcement, further solidifying the government’s commitment to maintaining competitive markets. The distinction between the Clayton Act and Sherman Act is particularly relevant to understanding the Live Nation-Ticketmaster case. Section 7 of the Clayton Act governs merger review, requiring pre-emptive intervention to prevent mergers that may substantially lessen competition. In contrast, Section 2 of the Sherman Act addresses anticompetitive conduct by existing monopolists. The 2010 Live Nation-Ticketmaster merger was reviewed under Clayton Act Section 7’s forward-looking standard, while the 2024 case challenges ongoing anticompetitive conduct under Sherman Act Section 2. This dual application of antitrust law to the same company highlights the complementary yet distinct roles of merger review and monopolization enforcement. The early enforcement and interpretation of the Sherman Act were shaped by landmark cases that helped define the scope and application of antitrust law. In Standard Oil Co. of New Jersey v. United States (1911), the Supreme Court established the “rule of reason” approach to analyzing antitrust violations. This case resulted in the breakup of Standard Oil, demonstrating the Act’s power to dismantle monopolies. The Court held that only “unreasonable” restraints of trade were prohibited, introducing a more limited interpretation of the Act. The “rule of reason” approach meant that the Court would consider the specific facts and circumstances of each case to determine whether a particular restraint of trade was unreasonable. The case also established that the Sherman Act should be interpreted in light of its broad policy goals rather than strictly construed. This approach had a significant impact on future antitrust enforcement. It allowed for a more flexible and adaptive application of the Act, enabling courts and regulators to address new forms of anticompetitive behavior as markets evolved. 
This interpretive framework empowered enforcers to look beyond the literal text of the Act and consider the overarching aims of promoting competition and protecting consumer welfare. As a result, antitrust enforcement could more effectively respond to changing economic conditions and business practices, particularly as industries became more complex and interconnected in the 20th century. Later, in United States v. Alcoa (1945), the Court of Appeals for the Second Circuit further refined the interpretation of the Sherman Act. Judge Learned Hand’s opinion clarified that merely possessing monopoly power is not illegal; rather, the Act prohibits the deliberate acquisition or maintenance of that power through exclusionary practices. Alcoa thus established an important distinction between achieving monopoly through superior skill, foresight, and industry, which is lawful, and maintaining it through anticompetitive conduct, which violates the Act. These cases illustrate the evolving understanding of the Sherman Act, moving from a strict interpretation to a more nuanced approach that considered market dynamics and the effects of business practices on competition. The mid-20th century saw a significant shift in antitrust enforcement characterized by a structural approach that focused on market concentration and firm size. This era, roughly spanning from the late 1930s to the early 1960s, was characterized by a prevailing view among federal antitrust authorities, economists, and policymakers that high market concentration was inherently harmful to competition. The passage of the Celler-Kefauver Act in 1950, which strengthened merger control, exemplified this approach. Influenced by economists from the Harvard School of industrial organization, particularly Joe Bain, antitrust authorities presumed that market structure determined conduct and performance. This “structure-conduct-performance” paradigm, central to the Harvard School's approach, posited that industry structure (like concentration levels) directly influenced firm behavior and market outcomes. This led to aggressive enforcement actions, including the breakup of large firms and the blocking of mergers that would have significantly increased market concentration. However, by the mid-1960s, antitrust thinking began to evolve, considering both market structure and firm conduct. This shift was reflected in the landmark 1966 Supreme Court case United States v. Grinnell Corp. , which established the modern two-part test for monopolization. The Grinnell test requires proof of both “the possession of monopoly power in the relevant market” and “the willful acquisition or maintenance of that power as distinguished from growth or development as a consequence of a superior product, business acumen, or historic accident.” This test, while still considering market power, introduced a focus on how that power was obtained or maintained. While the earlier era did consider power acquisition to some extent, the Grinnell test formalized and emphasized this aspect. It required a more comprehensive examination of a firm’s conduct and its effects on competition, moving beyond the primarily structural approach that often presumed anticompetitive effects from high market concentration alone. The Grinnell test has since been widely applied in monopolization cases under Section 2 of the Sherman Act, reflecting a more nuanced approach that aims to preserve competition without necessarily eliminating all monopolies. 
This evolution in antitrust enforcement demonstrates a move towards balancing concerns about market structure with considerations of firm conduct and efficiency. However, this balanced approach would soon give way to a more dramatic shift in antitrust philosophy that prioritized economic efficiency above other considerations. During the 1970s and 1980s, the Chicago School of Economics profoundly influenced the trajectory and scope of antitrust law and policy in the United States. This approach, led by economists and legal scholars such as Robert Bork, Richard Posner, and George Stigler, represented a significant shift in antitrust thinking. The Chicago School advocated for the “consumer welfare” standard as the primary goal of antitrust policy. This approach focused on economic efficiency and lower prices for consumers, rather than protecting competitors or maintaining a particular market structure. They argued that many practices previously considered anticompetitive could actually benefit consumers through increased efficiency. For example, Chicago School theorists argued that many mergers, even those that increased market concentration, could lead to efficiencies that benefit consumers. These efficiencies could manifest in several ways: through economies of scale that reduce production costs and potentially lower prices; through improved resource allocation that enhances product quality or variety; or through increased innovation. The Chicago School contended that these efficiency gains could outweigh potential negative effects of increased market concentration, ultimately resulting in net benefits for consumers in the form of lower prices, better products, or increased innovation. This led to a more lenient approach to DOJ merger review, with a higher bar for proving that a merger would harm competition. Vertical mergers (between companies at different levels of the supply chain) were viewed particularly favorably, as they were seen as potentially efficiency-enhancing. The Chicago School was skeptical of claims that vertical integration or vertical restraints (like exclusive dealing arrangements) were inherently anticompetitive. They argued that these practices often had pro-competitive justifications and should be judged based on their economic effects rather than per se rules. The Chicago School was driven by a strong belief in the self-correcting nature of markets. This thinking greatly influenced antitrust enforcement agencies and courts during the Reagan administration and beyond. It led to a significant reduction in antitrust enforcement actions and a higher bar for proving anticompetitive harm. This shift represented a move away from the structural approach of the mid-20th century towards a more economics-focused, effects-based analysis of competitive harm. Antitrust attorney William Markham offers a scathing critique of the consumer welfare standard’s impact on antitrust enforcement. He argues that since the late 1970s, courts have adopted increasingly restrictive antitrust doctrines based on this standard, which he views as misnamed and harmful to consumers. Markham contends that these doctrines have allowed various forms of monopolistic and anticompetitive practices to flourish unchecked. 
He states that the standard permits such practices “so long as the offenders take care not to charge prices that are demonstrably and provably supracompetitive.” This critique highlights how the narrow focus on consumer prices under the consumer welfare standard may overlook other forms of competitive harm. It’s important to understand this context when examining more recent developments and debates in antitrust law, including the challenges posed by digital markets and the arguments of the New Brandeis movement. IV. Judicial Interpretation of Section 2: Key Cases and Anticompetitive Practices To better understand how Section 2 of the Sherman Act has been applied in practice, it’s important to examine key antitrust cases that have shaped its interpretation and enforcement. These cases not only illustrate various types of anticompetitive practices but also demonstrate the evolution of antitrust thinking, particularly the rising influence of the Chicago School’s consumer welfare standard and subsequent challenges to this approach. Anticompetitive practices can take many forms, including refusals to deal, predatory pricing, tying, and exclusive dealing arrangements. Their legality often depends on specific facts, market conditions, and the prevailing economic theories of the time. This section examines several landmark cases that highlight these practices and trace the trajectory of antitrust law from the mid-1980s through the early 2000s, a period marked by significant shifts in antitrust philosophy and enforcement approaches. The 1985 Supreme Court case Aspen Skiing Co. v. Aspen Highlands Skiing Corp. marked a significant development in antitrust law’s approach to refusal to deal practices, a type of anticompetitive behavior where a firm with market power declines to do business with a competitor. The case involved Aspen Skiing Company, which owned three of four ski areas in Aspen, CO, discontinuing a long-standing joint lift ticket program with Aspen Highlands, the owner of the fourth area. While the Chicago School approach generally viewed refusals to deal as permissible, the Court in this case took a different stance. It ruled that this refusal to continue a voluntary cooperative venture could violate Section 2 of the Sherman Act, as it lacked any normal business justification and appeared designed to eliminate competition. This decision, occurring early in the ascendancy of the Chicago School, demonstrated a willingness to consider factors beyond short-term consumer welfare in antitrust analysis. Justice Stevens’ opinion emphasized the importance of intent in determining whether conduct is “exclusionary,” “anticompetitive,” or “predatory,” introducing a more contextualized approach to assessing market behavior. While not fully embracing the consumer welfare standard, the Court did consider the impact on consumers, noting that the joint ticket was popular and its elimination inconvenienced skiers. This case thus represents a crucial step in the evolution of antitrust law, bridging the gap between earlier, more aggressive interpretations of the Sherman Act and the more economics-focused analyses that would follow. It expanded the scope of antitrust enforcement by establishing that, in some cases, even a unilateral refusal to deal could be considered anticompetitive. 
Aspen Skiing set the stage for later cases dealing with complex market dynamics, particularly in industries where control over key resources or platforms can significantly impact competition – a concept that becomes increasingly relevant in the digital age and in cases like the Live Nation-Ticketmaster merger. As antitrust thinking continued to evolve, the influence of the Chicago School became more pronounced, as evidenced in subsequent landmark cases. This shift was reinforced by changes in the Supreme Court’s composition during the 1970s and 1980s, with appointments by Presidents Nixon and Reagan bringing more conservative justices to the bench who were often sympathetic to Chicago School economic theories. This changing court composition, coupled with the growing academic influence of the Chicago School, contributed to the changes in antitrust jurisprudence. The 1993 Supreme Court case Brooke Group Ltd. v. Brown & Williamson Tobacco Corp. marked a significant move in the treatment of predatory pricing claims, reflecting the growing dominance of the Chicago School’s consumer welfare standard. Predatory pricing occurs when a firm prices its products below cost with the intention of driving competitors out of the market, allowing the predator to later raise prices and recoup its losses. In this case, the Brooke Group accused Brown & Williamson of predatory pricing in the generic cigarette market. The Court established a two-pronged test for predatory pricing: (1) the plaintiff must prove that the prices are below an appropriate measure of cost, and (2) the plaintiff must demonstrate that the predator had a “reasonable prospect” of recouping its losses. This stringent standard, making predatory pricing claims extremely difficult to prove, clearly reflects the Chicago School’s skepticism towards such claims against firms. The Court’s reasoning prioritized short-term consumer benefits (lower prices) over long-term competitive concerns, embodying the consumer welfare standard. Justice Kennedy’s majority opinion explicitly cited Chicago School scholars, demonstrating how economic theory had come to dominate antitrust jurisprudence. This case illustrates how the Chicago School approach narrowed the scope of antitrust enforcement, potentially allowing some anticompetitive practices to escape scrutiny if they resulted in short-term consumer benefits. In the context of cases like Live Nation-Ticketmaster, this ruling underscores the challenges in proving anticompetitive behavior when short-term consumer benefits are present. The rise of the digital economy in the late 1990s and early 2000s presented new challenges to antitrust enforcement, leading to a reconsideration of established doctrines. While the Chicago School’s influence remained strong, the emergence of new technologies and business models began to test the limits of its consumer welfare-focused approach. The United States v. Microsoft Corp. (2001) case marked a pivotal moment in antitrust law’s application to the emerging digital economy, introducing new considerations for tying and monopoly maintenance in software markets. Tying occurs when a company requires customers who purchase one product to also purchase a separate product, potentially leveraging dominance in one market to gain advantage in another. The U.S. 
government accused Microsoft of illegally maintaining its monopoly in the PC operating systems market by tying its Internet Explorer browser to the Windows operating system and engaging in exclusionary contracts with PC manufacturers and Internet service providers. This case challenged the Chicago School's typically permissive view of tying arrangements, which often saw them as enhancing efficiency from a consumer welfare standpoint. The Court of Appeals for the D.C. Circuit ruled that Microsoft had violated Section 2 of the Sherman Act, finding that Microsoft’s practices, in aggregate, served to maintain its monopoly power by stifling competition from potential disruptors like Netscape’s browser and Sun’s Java technologies. While the court’s analysis still employed the consumer welfare standard, it showed a willingness to consider a broader range of anticompetitive effects, including harm to innovation and potential future competition. This approach reflected a nuanced evolution of antitrust thinking, acknowledging the unique characteristics of software markets and the rapid pace of technological change. Microsoft set important precedents for how antitrust law could be applied to fast-moving technology markets and platform economies, influencing later cases involving tech giants and potentially informing the analysis of platform-based businesses like Live Nation-Ticketmaster. It demonstrated that even in the era of Chicago School dominance, courts could adapt antitrust principles to address new forms of market power in the digital age. The resulting settlement, which imposed behavioral remedies rather than structural ones, sparked ongoing debates about the adequacy of traditional antitrust tools in addressing the unique characteristics of digital markets. Despite the more comprehensive and context-specific approach in Microsoft , the influence of the Chicago School remained strong, as demonstrated in the next significant case. Verizon Communications Inc. v. Law Offices of Curtis V. Trinko, LLP (2004) significantly narrowed the scope of antitrust liability for refusal to deal, revisiting and limiting the principles established in Aspen Skiing . In this case, Trinko, a law firm and Verizon customer, alleged that Verizon had violated Section 2 of the Sherman Act by providing insufficient assistance to new competitors in the local telephone service market, as required by the 1996 Telecommunications Act. The Court, in a unanimous decision authored by Justice Antonin Scalia, ruled in favor of Verizon, significantly limiting the circumstances under which a refusal to deal could violate antitrust law. Unlike in Aspen Skiing , where there was a history of voluntary cooperation, the Court emphasized that firms, even monopolists, generally have no duty to assist competitors. This ruling clearly reflects the Chicago School’s skepticism towards government intervention in markets and its focus on efficiency over other competitive concerns. The Court emphasized the importance of allowing firms to freely choose their business partners, arguing that forced cooperation could reduce companies’ incentives to invest and innovate. This aligns with the Chicago School’s concern about “false positives” in antitrust enforcement – the idea that overly aggressive antitrust action might mistakenly punish pro-competitive behavior, potentially discouraging beneficial business practices. 
By setting a high bar for refusal to deal claims, the Trinko decision further constrained the reach of antitrust law, potentially allowing monopolists more leeway in their dealings with competitors. This legal environment, which emphasized a narrow interpretation of anticompetitive behavior, set the stage for future mergers that consolidated market power across related industries. The 2010 approval of the Live Nation-Ticketmaster merger is a prime example of how this permissive approach to antitrust enforcement allowed for the creation of a vertically integrated entity with unprecedented control over the live entertainment industry. This case exemplifies how the Chicago School approach may have inadvertently created blind spots in antitrust enforcement, particularly regarding the long-term effects of monopoly power on innovation and competition. These cases collectively demonstrate the complex evolution of Section 2 application across various industries and business practices. From the nuanced approach in Aspen Skiing, through the height of Chicago School influence in Brooke Group and Trinko, to the adaptation to new technological challenges in Microsoft, they illustrate how antitrust law has grappled with changing economic theories and market realities. The cases show a clear trajectory of increasing influence of the Chicago School’s consumer welfare standard, but also reveal moments of resistance or adaptation to this approach when confronted with novel market dynamics. The Microsoft case, in particular, marks a significant point in this evolution, demonstrating how courts began to recognize the unique challenges posed by the digital economy. By examining these cases, it is possible to trace how the interpretation and application of Section 2 of the Sherman Act has shifted over time, reflecting changing economic theories and market realities. This evolution provides crucial context for understanding current debates about antitrust enforcement, particularly in rapidly evolving digital markets, and sets the stage for the emergence of new approaches like the New Brandeis movement. In considering the Live Nation-Ticketmaster case, this historical context helps to understand the complex landscape of antitrust enforcement and the challenges in addressing anticompetitive behavior today. V. The New Brandeis Movement: Redefining Antitrust for the Modern Era The landscape of antitrust enforcement is undergoing a fundamental shift as new perspectives challenge long-held assumptions about competition law. The limitations of the Chicago School approach, particularly evident in cases like Microsoft and Trinko, have sparked a reimagining of antitrust’s fundamental purposes and tools. 
As University of Michigan Law Professor Daniel Crane noted recently, “the bipartisan consensus that antitrust should solely focus on economic efficiency and consumer welfare has quite suddenly come under attack from prominent voices [from the political left and right] calling for a dramatically enhanced role for antitrust law in mediating a variety of social, economic, and political friction points, including employment, wealth inequality, data privacy and security, and democratic values.” At the heart of this antitrust approach evolution lies a debate between the traditional consumer welfare-focused approach and the emerging New Brandeis movement. For decades, the standard approach has emphasized consumer welfare as the primary goal, focusing on economic efficiency and preventing practices that directly harm consumers through higher prices, reduced output, or decreased innovation. This framework has generally led to a more permissive attitude toward mergers and a higher bar for finding antitrust violations. In contrast, the New Brandeis movement, championed by figures like FTC Chairwoman Lina Khan, advocates for a broader understanding of antitrust law’s goals. This perspective, sometimes critically dubbed “hipster antitrust,” contends that enforcement should consider additional factors such as market structure, the distribution of economic power, and the impact on workers, small businesses, and political democracy. The movement’s proponents have been particularly vocal about the need to reassess antitrust approaches in the context of the digital economy, expressing concern over the power wielded by large tech platforms. Lina Khan, a prominent figure in contemporary antitrust discourse, has developed an extensive body of work articulating the principles of the New Brandeis movement. In her article “The New Brandeis Movement: America’s Antimonopoly Debate,” Khan outlines this approach, which draws inspiration from Justice Louis Brandeis’s support of “America’s Madisonian traditions—which aim at a democratic distribution of power and opportunity in the political economy.” The movement represents a significant departure from the Chicago School of antitrust thinking. While the Chicago School emphasized efficiency, prices, and consumer welfare, the New Brandeis approach advocates for a return to a market structure-oriented competition policy. Key tenets include viewing economic power as intrinsically tied to political power, recognizing that some industries naturally tend towards monopoly and require regulation, emphasizing the structures and processes of competition rather than just outcomes, and rejecting the notion of natural market “forces” naturally leading to optimal economic outcomes or consumer welfare, instead understanding markets as fundamentally shaped and structured by law and policy. In her article “The Ideological Roots of America’s Market Power Problem,” Khan further critiques the current antitrust framework, arguing that it has weakened enforcement and allowed high concentration of market power across sectors. 
She asserts that addressing this issue requires challenging the ideological underpinnings of the current framework, writing, “Identifying paths for greater enforcement within a framework that systematically disfavors enforcement will fall short of addressing the scope of the market power problem we face today.” Ultimately, Khan and other New Brandeis proponents argue for a fundamental rethinking of antitrust’s goals and methods, advocating a return to its original purpose of distributing economic power and preserving democratic values. Building upon her critique of current antitrust frameworks, Khan has written extensively about the unique challenges posed by big tech companies, arguing that traditional enforcement methods are inadequate to address their market power. In her influential article “Amazon’s Antitrust Paradox,” Khan contends that the current antitrust framework is ill-equipped to tackle the anticompetitive effects of digital platforms like Amazon. These platforms, she argues, can leverage their market power and access to data to engage in predatory pricing, disadvantage rivals, and entrench their dominance. Khan writes in the abstract, “This Note argues that the current framework in antitrust—specifically its pegging competition to ‘consumer welfare,’ defined as short-term price effects—is unequipped to capture the architecture of market power in the modern economy. We cannot cognize the potential harms to competition posed by Amazon’s dominance if we measure competition primarily through price and output.” The article explains that despite Amazon’s massive growth, it generates low profits, often pricing products below cost and focusing on expansion rather than short-term gains. This strategy has allowed Amazon to expand far beyond retail, becoming a major player in various sectors including marketing, publishing, entertainment, hardware manufacturing, and cloud computing. Khan argues that this positions Amazon as a critical platform for many other businesses. She further elaborates, “First, the economics of platform markets create incentives for a company to pursue growth over profits, a strategy that investors have rewarded. Under these conditions, predatory pricing becomes highly rational—even as existing doctrine treats it as irrational and therefore implausible.” Khan argues that in platform markets like Amazon's, predatory pricing can be rational even if product prices appear to be at market rates. This is because the goal is not immediate profit, but rather to rapidly expand market share and establish dominance. The company can sustain short-term losses or razor-thin margins on product sales because the real value lies in becoming the dominant platform, which can lead to long-term profitability through various means such as data collection. Traditional antitrust doctrine, however, often assumes that below-cost pricing is irrational unless the company can quickly recoup its losses through higher prices, which may not apply in these complex, multi-sided markets. This creates a “paradox” where Amazon’s practices may be anticompetitive, yet they escape scrutiny under existing regulations. To address Amazon’s market power, one of Khan’s major suggestions includes restoring traditional antitrust and competition policy principles to its more structure-oriented approach. 
Khan’s influential academic critiques of current antitrust frameworks, particularly her analysis of Amazon’s market power, laid the groundwork for her approach as FTC chair, where she has sought to translate these ideas into concrete enforcement actions. Since Lina Khan’s appointment as chair of the FTC in 2021 by President Joe Biden, the agency has embarked on a more aggressive approach to antitrust enforcement, challenging some of America’s largest corporations and implementing significant policy shifts. This new direction has yielded mixed results and sparked debates about the future of competition policy in the United States. Khan’s FTC has increased scrutiny of Big Tech, filing an amended antitrust complaint against Facebook (Meta) that challenges its acquisitions of Instagram and WhatsApp, and suing to block Microsoft’s acquisition of Activision Blizzard, citing competition concerns in the video game industry. The agency has also initiated actions against other tech giants like Amazon. Under Khan’s leadership, the FTC has implemented stricter merger enforcement, including a more aggressive approach to reviewing mergers, particularly vertical mergers. The agency withdrew the 2020 Vertical Merger Guidelines, signaling skepticism towards vertical integration, and revised merger guidelines in collaboration with the Department of Justice. There’s also been an increased focus on “killer acquisitions” where large companies buy potential competitors. Khan has emphasized structural remedies over behavioral ones, advocating for more dramatic interventions like breaking up companies in certain cases. Additionally, recognizing the growing importance of data as a competitive asset, the FTC has integrated privacy and data protection concerns into its antitrust approach. For instance, the agency pursued a case against data broker Kochava for selling sensitive geolocation data, highlighting how control over user data can contribute to market power and potentially anticompetitive practices in the digital economy. The implementation of Khan’s approach has seen both successes and setbacks. Partial victories include the FTC v. Facebook (Meta) case, where the court allowed a revised complaint to proceed, and the FTC v. Illumina/Grail case, where the agency successfully challenged a vertical merger, albeit on largely traditional antitrust grounds. However, the FTC faced a setback when its attempt to block Meta’s acquisition of Within Unlimited was rejected. Ongoing challenges persist as courts have shown varying degrees of receptiveness to the expanded view of antitrust harm. As of April 2024, there had been no definitive high-level court ruling fully endorsing or rejecting the New Brandeis approach, with many decisions still relying heavily on the consumer welfare standard. Khan also faces political opposition and challenges to her rule-making initiatives. While Khan has successfully shifted the FTC’s focus towards more aggressive antitrust enforcement and brought increased attention to issues like data privacy and labor market effects, the legal and practical adoption of the New Brandeis philosophy remains a work in progress. The evolving legal landscape sets the stage for analyzing how future cases, such as potential actions against Ticketmaster, might proceed under this new, more expansive view of antitrust enforcement. VI. 
The Live Nation-Ticketmaster Case: A Critical Analysis of Market Power and Competitive Effects In May 2024, the DOJ, joined by 30 state and district attorneys general, filed a civil antitrust lawsuit against Live Nation Entertainment Inc. and its wholly owned subsidiary Ticketmaster “for monopolization and other unlawful conduct that thwarts competition in markets across the live entertainment industry.” More specifically, the DOJ accused Live Nation of violating Section 2 of the Sherman Act. In a subsequent press release, the DOJ highlighted several key issues resulting from Live Nation-Ticketmaster’s conduct. The DOJ argued that the company’s practices have led to a lack of innovation in ticketing, higher prices for U.S. consumers compared to other countries, and the use of outdated technology. Further, the DOJ asserted that Live Nation-Ticketmaster “exercises its power over performers, venues, and independent promoters in ways that harm competition” and “imposes barriers to competition that limit the entry and expansion of its rivals.” The lawsuit, which calls for structural relief – primarily the breakup of Live Nation and Ticketmaster – aims to reintroduce competition in the live concert industry, offer fans better options at more affordable prices, and create more opportunities for musicians and other performers at venues. The DOJ claims Live Nation-Ticketmaster uses a “flywheel” business model that self-reinforces its market dominance. This model involves using revenue from fans and sponsorships to secure exclusive deals with artists and venues, creating a cycle that excludes competitors. The complaint outlines several anticompetitive practices, including: partnering with potential rival Oak View Group to avoid competition, threatening retaliation against venues working with competitors, using long-term exclusive contracts with venues, restricting artists’ venue access unless they use Live Nation’s promotion services, and acquiring smaller competitors. The DOJ argues these practices create barriers for rivals to compete fairly. Live Nation Entertainment is the world’s largest live entertainment company, controlling numerous venues and generating over $22 billion in annual revenue globally. The DOJ’s action aims to address these alleged monopolistic practices in the live entertainment industry. Attorney General Merrick B. Garland said, “We contend that Live Nation uses illegal and anti-competitive methods to dominate the live events industry in the U.S., negatively impacting fans, artists, smaller promoters, and venue operators. This dominance leads to higher fees for fans, fewer concert opportunities for artists, reduced chances for smaller promoters, and limited ticketing options for venues. It’s time to break up Live Nation-Ticketmaster.” Beyond traditional market control, Live Nation’s monopolistic position is further entrenched by its significant data advantages, which raise additional competitive and privacy concerns. Through its ticketing operations and venue management, Live Nation amasses vast amounts of consumer data, including purchasing habits, musical preferences, and demographic information. This data not only enhances Live Nation’s ability to target marketing and adjust pricing strategies but also creates a major barrier to entry for potential competitors who lack access to such comprehensive consumer insights. 
Moreover, the company’s control over this data raises privacy concerns, as consumers may have limited understanding of how their information is being used or shared across Live Nation’s various business segments. These issues mirror broader debates in the digital age about the role of data in maintaining market power, with parallels to concerns raised about tech giants like Google and Facebook. As such, any antitrust action against Live Nation must consider not only traditional measures of market power but also the competitive advantages and potential privacy implications of its data practices. This aspect of the case underscores the need for antitrust enforcement to evolve in response to the increasing importance of data in modern business models. Notably, the DOJ focuses on Live Nation-Ticketmaster’s anticompetitive tactic of threatening and retaliating against venues that work with rivals. In the press release, the DOJ writes, “Live Nation-Ticketmaster’s power in concert promotions means that every live concert venue knows choosing another promoter or ticketer comes with a risk of drawing an adverse reaction from Live Nation-Ticketmaster that would result in losing concerts, revenue, and fans.” This directly violates the terms of the 2010 merger agreement, in which Live Nation was prohibited from retaliating against venues that use competing ticketing services. Considering that the current lawsuit’s main goal is the breakup of Ticketmaster and Live Nation, there exists an undeniable irony in the DOJ seeking to undo its own action of approving the merger in 2010. The head of Jones Day’s antitrust practice, Craig Waldman, said, “The DOJ is breaking out a really big gun here — seeking to blow up a company that was created with its approval. That looms large even though the DOJ has and will continue to try to frame Live Nation’s conduct as going well beyond the scope of the merger.” In hindsight, it is clear that the DOJ’s approval of the 2010 merger was an egregious mistake. Vice president and director of competition policy at the Progressive Policy Institute Diana Moss said, “The Live Nation-Ticketmaster merger was allowed to proceed in 2010, but the decision was an abject failure of antitrust enforcement. Instead of blocking the merger, the DOJ required the company, then with an 80% share of the ticketing market, to comply with ineffective conditions.” The continued anticompetitive practices and market dominance of Live Nation-Ticketmaster after the approved merger demonstrate that behavioral remedies were insufficient to protect competition. As such, structural remedies, specifically breaking up the company, are necessary to restore competition in the live entertainment industry. The fact that extensive pushback and criticism of the merger took place at the time of its approval highlights the limited scope and approach of antitrust enforcement, particularly when it comes to mergers. The Live Nation-Ticketmaster case will proceed in New York’s Southern District, known for its slow litigation process, potentially delaying a trial until late 2026. In its defense, Live Nation argues that it does not hold a monopoly, claiming that its profit margins are low and that ticket prices are influenced more by factors like artist popularity and secondary ticketing markets than by its own practices. Live Nation contends that the efficiencies achieved by merging with Ticketmaster benefit the industry by offering better services and prices compared to separating the companies. 
The company emphasizes that its vertical integration—combining promotion and ticketing services—creates a more efficient and artist-friendly business model. Live Nation also asserts that the secondary ticketing market, rather than its own practices, is primarily responsible for high ticket prices. The case will scrutinize whether the efficiencies claimed by Live Nation justify its market control or if the harm to competition outweighs these benefits. The DOJ’s push for a breakup, and refusal to settle for anything less than a breakup, reflects the relative success of the New Brandeis movement, particularly when considering the FTC’s revised merger guidelines in collaboration with the DOJ. When analyzed through the lens of the Grinnell test, Live Nation’s conduct clearly meets both prongs for monopolization under Section 2 of the Sherman Act. First, Live Nation undoubtedly possesses monopoly power in the relevant markets of concert promotion and ticketing. With an estimated 80% market share in ticketing for major concert venues and its dominant position in concert promotion, Live Nation far exceeds the typical thresholds courts have used to identify monopoly power. The company’s ability to impose high fees, dictate terms to artists and venues, and persistently maintain its market position despite widespread consumer dissatisfaction further evidences its monopoly power. Second, Live Nation has willfully acquired and maintained this power through exclusionary practices, not merely through superior products or business acumen. The DOJ’s complaint outlines numerous anticompetitive tactics, including threatening retaliation against venues that use competing services, leveraging its control over artists to pressure venues, and using long-term exclusive contracts to lock out competitors. These practices go well beyond legitimate competition based on merit. Moreover, Live Nation’s strategic acquisitions of potential competitors and its alleged collusion with Oak View Group to avoid competition further demonstrate its willful maintenance of monopoly power. The company’s “flywheel” business model, while potentially efficient, serves to entrench its dominance across multiple markets in ways that foreclose competition. Thus, Live Nation’s conduct satisfies both prongs of the Grinnell test, strongly supporting the DOJ’s case for illegal monopolization. It’s important to note, however, that while the Grinnell test remains a fundamental framework cited in monopolization cases, its application in modern antitrust law has evolved and become more nuanced. In recent decades, courts have increasingly used the Grinnell test as a starting point rather than a definitive standard. The test is now supplemented with more sophisticated economic analyses. Therefore, while the Grinnell test will likely be referenced in the Live Nation case, the court's analysis is expected to be more comprehensive, potentially incorporating more recent precedents and economic theories to fully capture the nuances of Live Nation’s market position and conduct. The Live Nation-Ticketmaster case illuminates several fundamental limitations in current antitrust doctrine. First, the case demonstrates how the Chicago School’s permissive approach to vertical mergers, embedded in Clayton Act enforcement, systematically underestimates the long-term competitive threats posed by vertical integration in platform markets. Second, the case exposes the inherent weakness of behavioral remedies in addressing vertical merger concerns. 
The failure of the 2010 settlement’s behavioral conditions—despite their specificity and ongoing oversight—suggests that such remedies are fundamentally inadequate for controlling the conduct of vertically integrated firms with substantial market power. Third, and perhaps most significantly, the case reveals the challenging burden facing regulators under Section 2 of the Sherman Act once a vertically integrated entity has established market dominance. Even with clear evidence of exclusionary conduct, proving harm under current Section 2 doctrine requires navigating complex questions about market definition and competitive effects that may not fully capture the subtle ways in which vertical integration can entrench market power. The Consumer Welfare Standard, which has dominated antitrust analysis since the 1980s, is inadequate in fully capturing the anticompetitive harm caused by Live Nation’s practices. While this standard primarily focuses on consumer prices and output, it fails to account for the multifaceted nature of competition in the live entertainment industry. Certainly, the high ticket prices and fees imposed by Live Nation are relevant concerns under this framework. However, this narrow focus obscures the broader and more insidious effects of Live Nation’s market dominance. For instance, the standard doesn’t adequately address the reduced choices faced by venues, which often feel compelled to contract with Live Nation for fear of losing access to popular acts. Similarly, it fails to capture the constraints placed on artists, who may find their touring options limited by Live Nation’s control over major venues and promotion services. The standard also struggles to account for the barriers to entry in the industry created by Live Nation’s vertically integrated structure and exclusive contracts, which stifle potential competitors and innovative business models in the ticketing and promotion markets. Moreover, the Consumer Welfare Standard’s short-term focus on prices neglects long-term impacts on innovation, diversity, and the overall health of the live entertainment ecosystem. It fails to account for how one company’s dominance can lead to less diverse music options and harm smaller venues and independent promoters who are crucial for supporting new artists. By focusing mainly on short-term price effects, the standard overlooks the broader, long-term damage to competition in the industry. This limitation of the Consumer Welfare Standard in the Live Nation case underscores the need for a more comprehensive approach to antitrust analysis, one that aligns more closely with the broader concerns of the New Brandeis movement. Building on the limitations of the Consumer Welfare Standard and the evolving application of the Grinnell test, it becomes clear that a more comprehensive approach to antitrust enforcement is necessary in the Live Nation case. The failure of the 2010 behavioral remedies further underscores this need. Despite prohibitions on retaliatory practices and requirements to license ticketing software to competitors, Live Nation has continued to dominate the market and engage in exclusionary conduct. This persistence of anticompetitive behavior, even under regulatory oversight, demonstrates that more robust, structural solutions are required.
In retrospect, it is evident that the DOJ should have never approved the merger in the first place, as the vertical integration of Live Nation and Ticketmaster created an entity with unprecedented market power and clear incentives for anticompetitive behavior. In light of these considerations, the DOJ should argue for a full structural separation of Live Nation and Ticketmaster as the primary remedy. This breakup would reintroduce genuine competition into both the concert promotion and ticketing markets, addressing the root causes of Live Nation’s market power more effectively than behavioral conditions. To ensure a competitive landscape post-separation, the court should also consider supplementary measures. These could include prohibiting exclusive deals with venues and imposing limits on the percentage of a market’s concert promotion that Live Nation can control. By advocating for these comprehensive structural changes, the DOJ can align its approach with the more aggressive, market structure-focused enforcement advocated by the New Brandeis movement. This approach not only addresses the immediate concerns in the live entertainment industry but also sets a potential precedent for future antitrust cases in similarly complex, vertically integrated industries. It recognizes that in today’s interconnected markets, protecting competition requires looking beyond short-term price effects to consider the broader ecosystem of industry participants, from artists and venues to emerging competitors and consumers. VII. Conclusion The Live Nation-Ticketmaster case serves as a stark illustration of the inadequacies of traditional antitrust enforcement in addressing the complexities of modern markets. The DOJ’s original approval of the 2010 merger, despite widespread criticism and concerns, highlights the limitations of the consumer welfare-focused approach and the ineffectiveness of behavioral remedies in curbing anticompetitive practices. The subsequent dominance of Live Nation in the live entertainment industry, characterized by its “flywheel” business model and alleged exclusionary practices, demonstrates the need for a more comprehensive and aggressive approach to antitrust enforcement. This case represents a critical juncture in the evolution of antitrust law, potentially marking a shift towards the more expansive view advocated by the New Brandeis movement. The DOJ’s pursuit of structural remedies, specifically the breakup of Live Nation and Ticketmaster, signals a recognition that protecting competition in today’s interconnected markets requires looking beyond short-term price effects to consider the broader ecosystem of industry participants. As such, the outcome of this case will have far-reaching implications for future antitrust enforcement, particularly in industries characterized by vertical integration and data-driven market power. It may set a precedent for how antitrust authorities approach complex, multi-faceted monopolies in the digital age, potentially reshaping the landscape of competition law for years to come. Ultimately, the Live Nation case underscores the urgent need for antitrust law to evolve in response to the changing nature of market power, ensuring that it remains an effective tool for promoting competition, innovation, and consumer welfare in the 21st-century economy. References Abad-Santos, Alex. “How Disappointed Taylor Swift Fans Explain Ticketmaster’s Monopoly.” Vox. Last modified November 21, 2022.
https://www.vox.com/culture/2022/11/21/23471763/taylor-swift-ticketmaster-monopoly. Abbott, Alden. “Will the Antitrust Lawsuit against Live Nation Break Its Hold on Ticketmaster?” Forbes. Last modified May 28, 2024. https://www.forbes.com/sites/aldenabbott/2024/05/28/will-the-justice-departments-monopolization-lawsuit-kill-live-nation/. Abovyan, Kristina, and Quinn Scanlan. “FTC Is ‘just Getting Started’ as It Takes on Amazon, Meta and More, Chair Lina Khan Says.” ABC News, May 5, 2024. https://abcnews.go.com/Politics/ftc-started-takes-amazon-meta-chair-lina-khan/story?id=109928219. “Antitrust Law Basics – Section 2 of the Sherman Act.” Thomson Reuters. Last modified May 17, 2023. https://legal.thomsonreuters.com/blog/antitrust-law-basics-section-2-of-the-sherman-act/. “The Antitrust Laws.” U.S. Department of Justice. Accessed December 20, 2023. https://www.justice.gov/atr/antitrust-laws-and-you#:~:text=The%20Sherman%20Antitrust%20Act,or%20markets%2C%20are%20criminal%20violations. Aspen Skiing Co. v. Aspen Highlands Skiing Corp., 472 JUSTIA (10th Cir. June 19, 1985). https://supreme.justia.com/cases/federal/us/472/585/. “A Brief Overview of the ‘New Brandeis’ School of Antitrust Law.” Patterson Belknap. Last modified November 8, 2018. https://www.pbwt.com/antitrust-update-blog/a-brief-overview-of-the-new-brandeis-school-of-antitrust-law. Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., 509 JUSTIA (4th Cir. Mar. 29, 1993). https://supreme.justia.com/cases/federal/us/509/209/. “Competition and Monopoly: Single-Firm Conduct under Section 2 of the Sherman Act: Chapter 1.” U.S. Department of Justice. https://www.justice.gov/archives/atr/competition-and-monopoly-single-firm-conduct-under-section-2-sherman-act-chapter-1#:~:text=Section%202%20of%20the%20Sherman%20Act%20makes%20it%20unlawful%20for,foreign%20nations%20.%20.%20.%20.%22. “Court Rejects FTC’s Bid to Block Meta’s Proposed Acquisition of VR Fitness App Developer.” Crowell. https://www.crowell.com/en/insights/client-alerts/court-rejects-ftcs-bid-to-block-metas-proposed-acquisition-of-vr-fitness-app-developer. “Federal Trade Commission and Justice Department Release 2023 Merger Guidelines.” Federal Trade Commission. Accessed December 18, 2023. https://www.ftc.gov/news-events/news/press-releases/2023/12/federal-trade-commission-justice-department-release-2023-merger-guidelines. Hovenkamp, Herbert. “Framing the Chicago School of Antitrust Analysis.” University of Pennsylvania Carey Law School 168, no. 7 (2020). https://scholarship.law.upenn.edu/cgi/viewcontent.cgi?article=3115&context=faculty_scholarship. Hovenkamp, Herbert J. “The Rule of Reason.” Penn Carey Law: Legal Scholarship Repository, 2018. https://scholarship.law.upenn.edu/cgi/viewcontent.cgi?article=2780&context=faculty_scholarship. Jones, Callum. “‘She’s Going to Prevail’: FTC Head Lina Khan Is Fighting for an Anti-monopoly America.” The Guardian, March 9, 2024. https://www.theguardian.com/us-news/2024/mar/09/lina-khan-federal-trade-commission-antitrust-monopolies. Katz, Ariel. “The Chicago School and the Forgotten Political Dimension of Antitrust Law.” The University of Chicago Law Review, 2020. https://lawreview.uchicago.edu/print-archive/chicago-school-and-forgotten-political-dimension-antitrust-law. Khan, Lina. “Amazon’s Antitrust Paradox.” The Yale Law Journal 126, no. 3 (2017). https://www.yalelawjournal.org/note/amazons-antitrust-paradox. Khan, Lina. “The Ideological Roots of America’s Market Power Problem.” The Yale Law Journal 127 (June 4, 2018).
https://www.yalelawjournal.org/forum/the-ideological-roots-of-americas-market-power-problem. Khan, Lina. “The New Brandeis Movement: America’s Antimonopoly Debate.” Journal of European Competition Law & Practice 9, no. 3 (2018): 131-32. https://doi.org/10.1093/jeclap/lpy020. Koenig, Bryan. “DOJ Has a Long Set to Play against Live Nation-Ticketmaster.” Law360. Last modified May 23, 2024. https://www.crowell.com/a/web/4TwXzF6sFW49adb3eTjznR/doj-has-a-long-set-to-play-against-live-nation-ticketmaster.pdf. Layton, Roslyn. “Live Nation's Anticompetitive Conduct Is a Problem for Security.” ProMarket. Last modified June 25, 2024. https://www.promarket.org/2024/06/25/live-nations-anticompetitive-conduct-is-a-problem-for-security/. Levine, Jay L. “1990s to the Present: The Chicago School and Antitrust Enforcement.” Porterwright. Last modified June 1, 2021. https://www.antitrustlawsource.com/2021/06/1990s-to-the-present-the-chicago-school-and-antitrust-enforcement/. Markham, William. “How the Consumer-Welfare Standard Transformed Classical Antitrust Law.” Law Offices of William Markham, P.C. Last modified 2021. https://www.markhamlawfirm.com/wp-content/uploads/2023/06/How-the-Consumer-Welfare-Standard-Transformed-Classical-Antitrust-Law.final_.pdf. McKenna, Francine. “What Made the Chicago School so Influential in Antitrust Policy?” Chicago Booth Review. Last modified August 7, 2023. https://www.chicagobooth.edu/review/what-made-chicago-school-so-influential-antitrust-policy. Office of Public Affairs - U.S. Department of Justice. “Justice Department Sues Live Nation-Ticketmaster for Monopolizing Markets across the Live Concert Industry.” News release. March 23, 2024. https://www.justice.gov/opa/pr/justice-department-sues-live-nation-ticketmaster-monopolizing-markets-across-live-concert. “Sherman Antitrust Act.” Britannica. Accessed August 5, 2024. https://www.britannica.com/biography/John-Sherman. “Sherman Anti-Trust Act (1890).” National Archives. https://www.archives.gov/milestone-documents/sherman-anti-trust-act. “The Ticketmaster/LiveNation Merger: What Does It Mean for Consumers and the Future of the Concert Business?: Hearings Before the Committee on the Judiciary, Subcommittee on Antitrust, Competition Policy and Consumer Rights (2009) (statement of David A. Balto). https://www.judiciary.senate.gov/imo/media/doc/balto_testimony_02_24_09.pdf. Treisman, Rachel. “Taylor Swift Says Her Team Was Assured Ticket Demands Would Be Met for Her Eras Tour.” npr. Last modified November 18, 2022. https://www.npr.org/2022/11/17/1137465465/taylor-swift-ticketmaster-klobuchar-tennessee. United States v. Microsoft Corp., 584 JUSTIA (Apr. 17, 2018). https://supreme.justia.com/cases/federal/us/584/17-2/. “U.S. v. Microsoft: Court’s Findings of Fact.” U.S. Department of Justice. https://www.justice.gov/atr/us-v-microsoft-courts-findings-fact. Varney, Christine A. “The TicketMaster/Live Nation Merger Review and Consent Decree in Perspective.” Speech presented at South by Southwest, March 18, 2010. U.S. Department of Justice. Last modified March 18, 2010. https://www.justice.gov/atr/speech/ticketmasterlive-nation-merger-review-and-consent-decree-perspective. Verizon Communications, Inc. v. Law Offices of Curtis V. Trinko, 540 JUSTIA (Oct. 2003). https://supreme.justia.com/cases/federal/us/540/398/.

  • Burden of Innocence | brownjppe

The Burden of Innocence: Arendt’s Understanding of Totalitarianism through its Victims Elena Muglia Author Emerson Rhodes Meruka Vyas Editors Hannah Arendt set out to describe an ideology and government that burst asunder past understandings of politics, morality, and the law. In Origins of Totalitarianism, Arendt argues that totalitarianism could not fit into previous political typologies. Instead, it navigates between definitions of political regimes like tyranny and authoritarianism, as well as distinctions historically made between lawlessness and lawfulness, arbitrary and legitimate power. Even then, Arendt holds on to the idea that totalitarianism can be described and analyzed despite escaping traditional understanding as a political ideology and system. In the preface of the first edition, Arendt expresses this hope, writing that Origins was: “Written out of the conviction that it should be possible to discover the hidden mechanics by which all traditional elements of our political and spiritual world were dissolved into a conglomeration where everything seems to have lost specific value and has become unrecognizable for human comprehension, unusable for human purpose.” One of the traditional elements of our “political and spiritual” world that she inquires about is the question of innocence, guilt, and responsibility. How can these concepts, which have both moral and legal implications, be applied and understood in the case of Nazi Germany, a regime void of morality and legality? Many political theorists have explored Arendt’s understanding of guilt in her report Eichmann in Jerusalem. In the report, Arendt utilizes Adolf Eichmann’s case—a Nazi Party official who helped carry out the Final Solution—to provide a concrete example of someone who is guilty but does not fit traditional understandings of what is required to be criminally guilty. Alan Norrie points out that Arendt exposes the tension between Eichmann’s lack of criminal intent, mens rea, and his criminal and evil actions (Norrie 2008, 202). The totality of totalitarianism complicates his criminal guilt, as Nazi Germany rendered every member of society complicit in its crimes. To unpack this complex nexus of guilt and responsibility, Iris Young looks at two of Arendt’s essays: “Organized Guilt and Universal Responsibility” and “Collective Responsibility” (Young 2011, 90). Young outlines how Arendt understands guilt as centered on the self, while responsibility implies a relationship with the world and membership in a political community (Young 2011, 78). Guilt arises from an objective consequence of somebody’s actions (Young 2011, 79) and is not a product of someone’s subjective state. With this understanding, everybody in Nazi Germany was responsible (irrespective of whether they took up political responsibility), but not everybody was guilty. Those who acted publicly against the Nazi Regime, like the Scholl siblings, took up political responsibility in a positive sense (Young 2011, 91). Richard Bernstein, who also discusses Eichmann, shares this understanding with Young—Eichmann is criminally guilty, but bystanders are not. Bernstein, however, elucidates that the bystanders’ responsibility is imperative to understand because their complicity was an “essential condition for carrying out the Final Solution” (Bernstein 1999, 165).
By focusing on the areas of guilt and responsibility and primarily looking at Eichmann, however, these scholars leave a theoretical gap in understanding the relationship between the victims—the stateless and Jewish people for Nazi Germany—and totalitarian ideology. These groups lack political responsibility within the totalitarian system because their innocence implies a separation from the world and a political community. In her essay “Collective Responsibility,” Arendt notes that the twentieth century has created a category of men who “cannot be held politically responsible for anything” and are “absolutely innocent.” The innocence of these victims and their apoliticality strikes at the heart of why Arendt postulates that totalitarian ideology and terror constitute a novel form of government—“[it] differs essentially from other forms of political oppression known to us such as despotism, tyranny and dictatorship.” Totalitarianism targets victims en masse, but their status as victims is not based on any action they take against the regime. While Norrie, Young, and Bernstein all address that Arendt thinks that any “traditional” conception of the relationship between law and justice cannot be applied to totalitarianism directly, by focusing primarily on Eichmann, they are missing an understanding of a group of people that allowed totalitarianism to explode these notions. By tracking and parsing through Arendt’s understanding of the innocents and innocence in Origins of Totalitarianism and placing it in conversation with her understanding of action in The Human Condition, I elaborate on the unique political relationship, or lack thereof, between totalitarian ideology and the innocents. I argue that the condition of innocence of the victims represents the essence of totalitarianism’s unique form of oppression and negation of the human condition. The positioning of the innocents in a totalitarian society acts as a lens for how totalitarianism aims to reshape traditional notions of political, moral, and legal personhood. I demonstrate this by first outlining what created fertile ground in the 20th century for the condition of rightlessness of the innocents. Second, I highlight how the targeting of innocents in concentration camps lies at the heart of totalitarianism’s destruction of the juridical person—someone who is judged based on their actions. Third, I argue that by bending any notions of justice, totalitarianism destroys the moral person, a destruction that is best expressed in the innocents’ lack of internal freedom. Finally, I argue that all these components entail severing the victims from a world where they can appear and be recognized as humans. Overall, I contend that while many of the techniques unleashed on the innocents apply, to an extent, to everyone under totalitarianism, including people like Eichmann, the innocents represent the full realization of totalitarianism’s attempt to alter the essence of a political and acting person. To understand how totalitarian regimes created a mass of ‘superfluous’ people who existed outside the political realm, it is first necessary to highlight what conditions Arendt thinks sowed fertile ground for totalitarian domination and terror in the first place. A crucial condition is rooted in the failures of the nation-state in dealing with the new category of stateless people in the interwar period in Europe.
Following WWI, multiethnic empires, like the Austro-Hungarian and Ottoman empires, dissolved, which led Europe to resort to the familiar nation-state principle—presuming that each nationality should establish its own state. As Ayten Gundogdu writes, “the unquestioning application of this principle turned all those who were ‘ejected from the old trinity of state-people-territory’ into exceptions to the norm” (Gundogdu 2014, 31). These exceptions to the norm, as Jewish people were, could not be repatriated anywhere because they did not have a nation. Instead of integrating these minorities and making them fully-fledged political members, policies like the Minority Treaties codified minorities as exceptions to the law. The massive scale of refugees that existed outside a political community left a set of people without any protections apart from those that states granted out of their own prerogative and charity. This stateless crisis crystallized, for Arendt, the aporia of human rights—even though human rights guarantee universal rights, irrespective of any social and political category, they are enforced based on political membership. Human rights end up being the rights of citizens, leading the stateless to a condition of “absolute rightlessness.” This condition of rightlessness does not entail the loss of singular rights—just like the law temporarily deprives a criminal of the right to freedom—but a deprivation of what Arendt calls the right to have rights, defined by Arendt as a right to live “in a framework where one is judged by one’s actions and opinions.” Instead of being judged based on actions or opinions, the stateless are judged based on belonging to a group outside the nation. This innocence, an inability to be judged based on one’s deeds and words, is the defining mark of the stateless’ loss of a “political status” (Arendt 1951, 386), which primes these groups of people for the particular form of oppression that totalitarianism entails. While the stateless and their condition of rightlessness were constructed even before Nazi Germany, the existence and the continuous creation of a mass of innocents lies at the core of the raison d’être of totalitarian politics. According to Arendt, totalitarianism operates based on a law of Nature and History, which has “mankind” as an end product, an “‘Aryan’ world empire” for Hitler. Mankind becomes the “embodiment” of law and justice. Jewish people, under Nazi Germany, are portrayed as the “objective enemy” halting nature’s progression, whereby every stage of terror is seen as a further step toward achieving the ultimate human. This continuous need to follow a Darwinian law of nature leads Arendt to define one of totalitarianism’s defining features as the law of movement: the only way that totalitarian regimes can justify their existence, expansion, and domination, one that relies almost entirely on the group of innocents. The innocents are crucial components of the concentration camps because they are placed there alongside criminals who have committed an action. If they only targeted “criminals” or those that committed particular actions, the Nazi Party would have scant logic to fulfill its law of movement. The “innocents” are “both qualitatively and quantitatively the most essential category of the camp population,” in the sense that they exist in an “enormous” capacity and will always be present in society.
Totalitarianism relies on innocents because their existence removes any “calculable punishment for definite offenses.” Totalitarian politics aim, eventually, to turn everyone into an innocent mass that could be targeted, not because of their actions, but because of their existence. Even criminals were often sent to concentration camps only after they had completed their prison sentences, meaning they were going there not because of their criminal activity but rather arbitrarily, sacrificing a mass in favor of the laws of history and nature. The condition of rightlessness combined with total domination, exerted through the concentration camps, obliterates the juridical person for all the victims of totalitarianism. The juridical person is the foundation of modern understandings of law, constituting a person who bears rights and can exercise rights and who, in derogation of the law, faces proportional and predictable consequences. By destroying the juridical person and turning its victims into a mass of people who exist outside any legal framework and logic, totalitarianism operates beyond any previously conceived notions of justice. As Arendt explains: “The one thing that cannot be reproduced [in a totalitarian regime] is what made the traditional conceptions of Hell tolerable to man: the Last Judgment, the idea of an absolute standard of justice combined with the infinite possibility of grace. For in the human estimation, there is no crime and no sin commensurable with the everlasting torments of Hell. Hence the discomfiture of common sense, which asks: What crime must these people have committed in order to suffer so inhumanly? Hence also the absolute innocence of the victims: no man ever deserved this. Hence finally the grotesque haphazardness with which concentration camp victims were chosen in the perfected terror state: such punishment can, with equal justice and injustice, be inflicted on anyone.” By “traditional conceptions of Hell” tolerable to man, Arendt means a Hell where every individual will be judged based on their actions and nothing else on the day of the Last Judgment. Totalitarianism shatters this idea and any existence of an “absolute standard of justice” through the concentration camps, which create Hell on earth but without any rightful last judgment. Even more importantly, because of these innocents and the arbitrariness and “haphazardness” of the way they are chosen, Arendt explains that state punishment can be “inflicted on anyone.” A tyranny targets the opponents of a regime or anyone who causes disorder, but totalitarianism cannot be understood through such a utilitarian lens. As Arendt points out in various places in Origins, without understanding totalitarianism’s “anti-utilitarian behavior,” it is difficult, if not impossible, to understand its targeting of people who commit no specific action against the regime. Concentration camps and terror materialize the law of movement just as positive law materializes notions of justice in lawful governments. The guilty are innocents who stand in the way of movement. Totalitarianism not only operates outside any traditional forms of legality and juridical personhood but also transcends any understanding of morality—the moral person is destroyed just as the juridical one is; and this is, once again, fully expressed through the treatment of innocents, who become the ideal subject of totalitarianism. The ideal subject of totalitarianism lacks both internal and external freedom—which is precisely what is imposed on the victims.
A lack of internal freedom implies an inability to distinguish between right and wrong. As Arendt explains, “totalitarian terror,” in the concentration camps, achieves triumph when it cuts the moral person off from “the individualist escape and in making the decisions of conscience questionable and equivocal.” The Nazi Regime achieved this by asking the innocent to make impossible decisions that involved balancing their own lives and those of their families. This often involved a blurring of “the murderer and his victim” by involving even the concentration camp inmates in the operations of the camp. Concerning this, Robert Braun talks about Primo Levi’s discussion of the complicated victim—explaining that those who survived the concentration camps are always seen as suspect because of these blurred lines (Braun 1994, 186). Arendt has a parallel opinion to Levi that focuses more on those victims’ subjective state, explaining that when they return to the “world of the living,” they are “assailed by doubts” regarding their truthfulness. The innocents represent the perfect totalitarian subject as their doubts represent an inability to distinguish between truth and falsehood, which Arendt describes as the “standards of thought.” What is most striking about the destabilization of conscience is that it results in a freezing effect and an inability to act. As Arendt explains, “Through the creation of conditions under which conscience ceases to be adequate and to do good becomes utterly impossible, the consciously organized complicity of all men in the crimes of totalitarian regimes is extended to the victims and thus made really total.” Regardless of what “good” entails, doing it entails committing an action that is for others. Doing good can be understood as analogous to how Young interprets Arendt’s understanding of political responsibility… further explaining how the victims are left in a condition of non-responsibility through their inability to both distinguish what is right and wrong, and act on it. The erasure of “acting” in totalitarianism gains new meaning, or rather a more comprehensive explanation, when looking at Arendt’s discussion of acting in The Human Condition. Arendt’s work in The Human Condition illuminates the full extent of why acting becomes impossible under totalitarianism, especially for its victims. As Nica Siegel explains, an essential aspect of her understanding of action in The Human Condition is the spatialized logic that grounds action in a space where one can “reveal their unique personal identities and make their appearance in the world.” Only in this way can an action take place as it has a “who”—a unique author—at its root, and thus has the potential to create new beginnings. With this understanding, totalitarianism is the antithesis of action for everyone, to an extent, but completely for the innocent. Totalitarianism removes their space to act internally—through the destruction of conscience explained in the previous section—and externally—removing any place to appear publicly. The innocent are removed from the rest simply by being in the concentration camps, isolated from everyone else but also from one another. This means that totalitarianism, in practice, removes any source and space for spontaneity.
Arendt defines spontaneity in Origins almost identically to how she defines action in The Human Condition, saying that spontaneity is “man’s power to begin something new out of his resources, something that cannot be explained on the basis of reactions to environment or events.” This condition of the innocent also illuminates why creating something new and making a political statement are impossible under totalitarianism. As Arendt explains, “no activity can become excellent if the world does not provide a proper space for its exercise.” As with many other tactics in totalitarianism, this lack of excellence and new beginnings is rooted in the fate of the innocents. Nobody’s actions can “become excellent” if they face the same consequences of the concentration camp as the mass of those who commit no action. This is why under totalitarianism, “martyrdom” becomes “impossible.” Just as totalitarianism assimilates criminals with innocents in their punishment, political actors are also assimilated to this category, as they are “deprived of the protective distinction that comes of their having done something,” just as the innocents are. What totalitarianism does to its victims is, therefore, a symptom of its wider perversion of human individuality and action in general. Even perpetrators like Eichmann lose their sense of individuality—A.J. Vetlesen has described the phenomenon as a double dehumanization between the victims and the perpetrator. Every bureaucrat in Nazi Germany was replaceable, and totalitarianism made them feel, paradoxically, “subjectively innocent,” in the sense that they do not feel responsible for their actions “because they do not really murder but execute a death sentence pronounced by some higher tribunal.” Jalusic argues that both aspects of dehumanization have in common the “loss of the human condition,” but what Jalusic misses is that Vetlesen, by arguing that it is the persecutors who dehumanize themselves to avoid personal responsibility and alienate themselves from their actions, goes against the cog-in-the-machine theory. The perpetrators retain a level of agency that is ultimately denied to the victims. The victims do not alienate themselves from their actions, as they cannot act in the first place. When Nazi officials send victims to the concentration camp, they lose any ability to appear and thus face a loss of the human condition, as Arendt describes in The Human Condition: “A life without speech and without action, on the other hand-and this is the only way of life that in earnest has renounced all appearance and all vanity in the biblical sense of the word-is, literally dead to the world; it has ceased to be a human life because it is no longer lived among men.” The emphasis she places on action as being an essential part of living “among men” explains why, according to her, totalitarianism, unlike other forms of oppressive government, transforms “human nature itself.” While she uses the term “human nature,” she makes a strict distinction between human nature and the human condition in The Human Condition, arguing that it is impossible for us to understand human nature without resorting to God or a deity. Even in Origins, when talking about human nature, she criticizes those, like the positivists, who see it as something fixed and not constantly conditioned by ourselves. In light of her understanding of the human condition, I argue that Arendt means that totalitarianism undermines an essential part of the human condition, not human nature.
Arendt views the human condition, as opposed to human nature, as being rooted in plurality. By plurality, she means that each individual is uniquely different but also shares a means of communication with every other individual, and thus each individual has the ability to make themselves known and engage with others. With this in mind, “human plurality is the basic condition for both action and speech,” as each individual can make a statement and be understood by others. The treatment of victims and their innocence as their defining factor highlights that fellow humans can distort and condition crucial aspects of our human condition in favor of laws that pretend that humans can instill justice and nature on earth. To a degree, totalitarianism subjects everyone to the conditions of “innocence” that victims face. What distinguishes the victims from other agents under totalitarianism is that they demonstrate the ability of totalitarian ideology to instill a complete condition of innocence by placing a person entirely outside any political and legal realm and, by extension, outside of mankind. Innocence under totalitarianism is not a negative condition—in the sense of not having done anything, not taking action—but it is primarily a lack of positive freedom—the ability to do something and act. Arendt’s understanding of innocence elaborates on the unique condition of superfluousness under totalitarianism. This ‘superfluousness’ is justified through a legal and political doctrine that explodes past legal and normative frameworks by being based on movement instead of stability. The law of nature is in a constant process of Darwinian development, with the superfluous innocents as the sine qua non to keep going. A lot of what happens to the innocents, such as the obliteration of a space to act, does happen to everyone under totalitarianism; however, the innocents bear the full expression of totalitarianism’s fight against past notions of moral, political, and legal personhood. The innocents are not only cut off from this personhood but also from what Arendt thinks it means to be human, as they represent an inability to do what human beings do, which is to create beginnings through spontaneous action. The unique condition of innocence that the victims of totalitarianism face exposes totalitarianism’s own legal and political theory. The Law of Nature that Nazi Germany espouses here cannot exist without the realization of a group of innocents who prove the nihilistic idea that humans can be sacrificed for perfected mankind. As Arendt explains, the concentration camps are where the changes in “human nature are tested.” We can only understand how totalitarianism could occur by looking at this unique political erasure. The terror and fate of the innocents act as proof for everyone in the totalitarian regime that they could be next. The status of the victims also sheds light on the inexplicable deeds that Eichmann committed, as Arendt writes that one of the few, if not the only, discernible aspects of totalitarianism is that “radical evil has emerged in connection with a system in which all men have become equally superfluous.” Totalitarianism proves that it is fellow humans, themselves dehumanized albeit to a different degree, who completely sever an individual’s ties to the political and legal structures meant to protect them. This conclusion and elaboration of the peculiar form of oppression and domination of totalitarianism has pressing practical and theoretical implications for modern-day politics.
As Arendt explains, totalitarianism is born from modern conditions, and so looking at how modern polities can and do create superfluousness can be a thermometer for descent into totalitarianism. After all, it is important to remember that statelessness in the 20th century came before totalitarianism’s domination and terror. References Arendt, Hannah. “Collective Responsibility.” Amor Mundi: Explorations in the Faith and Thought of Hannah Arendt , edited by S. J. James W. Bernauer, Springer Netherlands, 1987, pp. 43–50. Springer Link , https://doi.org/10.1007/978-94-009-3565-5_3. ---. Eichmann in Jerusalem: A Report on the Banality of Evil . Penguin Books, 2006. ---. The Human Condition: Second Edition . Edited by Margaret Canovan and a New Foreword by Danielle Allen, University of Chicago Press. University of Chicago Press , https://press.uchicago.edu/ucp/books/book/chicago/H/bo29137972.html. Accessed 8 May 2024. ---. The Origins of Totalitarianism . 1951. Penguin Classics, 2017. Benhabib, Seyla. “Judgment and the Moral Foundations of Politics in Arendt’s Thought.” Political Theory , vol. 16, no. 1, 1988, pp. 29–51. JSTOR , https://www.jstor.org/stable/191646. Bernstein, Richard J. “Responsibility, Judging, and Evil.” Revue Internationale de Philosophie , vol. 53, no. 208 (2), 1999, pp. 155–72. JSTOR , https://www.jstor.org/stable/23955549. Braun, Robert. “The Holocaust and Problems of Historical Representation.” History and Theory , vol. 33, no. 2, May 1994, p. 172. DOI.org (Crossref) , https://doi.org/10.2307/2505383. Gundogdu, Ayten. Rightlessness in an Age of Rights . Oxford University Press, 2015. DOI.org (Crossref) , https://doi.org/10.1093/acprof:oso/9780199370412.001.0001. Jalusic, Vlasta. “Organized Innocence and Exclusion: ‘Nation-States’ in the Aftermath of War and Collective Crime.” Social Research , vol. 74, no. 4, 2007, pp. 1173–200. JSTOR , https://www.jstor.org/stable/40972045. Norrie, Alan. “Justice on the Slaughter-Bench: The Problem of War Guilt in Arendt and Jaspers.” New Criminal Law Review , vol. 11, no. 2, Apr. 2008, pp. 187–231. DOI.org (Crossref) , https://doi.org/10.1525/nclr.2008.11.2.187. Siegel, Nica. “The Roots of Crisis: Interrupting Arendt’s Radical Critique.” Theoria: A Journal of Social and Political Theory , vol. 62, no. 144, 2015, pp. 60–79. JSTOR , https://www.jstor.org/stable/24719945. Vetlesen, Arne Johan. Evil and Human Agency: Understanding Collective Evildoing . 1st ed., Cambridge University Press, 2005. DOI.org (Crossref) , https://doi.org/10.1017/CBO9780511610776. Young, Iris Marion, and Martha Nussbaum. Responsibility for Justice . Oxford University Press, 2011. DOI.org (Crossref) , https://doi.org/10.1093/acprof:oso/9780195392388.001.0001.

  • Ronald Reagan and the Role of Humor in American Movement Conservatism

Ronald Reagan and the Role of Humor in American Movement Conservatism Abie Rohrig In this paper, I argue that analysis of Reagan’s rhetoric, and particularly his humor, illuminates many of the attitudes and tendencies of both conservative fusionism—the combination of traditionalist conservatism with libertarianism—and movement conservatism. Drawing on Ted Cohen’s writings on the conditionality of humor, I assert that Reagan’s use of humor reflected two guiding principles of movement conservatism that distinguish it from other iterations of conservatism: its accessibility and its empowering message. First, Reagan’s jokes were accessible in that they are funny even to those who disagree with him politically; in Cohen’s terms, his jokes were hermetic (requiring a certain knowledge to be funny), and not affective (requiring a certain feeling or disposition to be funny). The broad accessibility of Reagan’s humor reflected the need of movement conservatism to unify constituencies with varying political feelings and interests. Second, Reagan’s jokes were empowering—they presume and therefore posit the competence of their audience. Many of his jokes implied that if an average citizen were in charge of the government they could do a far better job than status quo bureaucrats. This tone demonstrated the tendency of movement conservatism to emphasize individual freedom and self-governance as a through line of its constituent ideologies. In the first part of this paper, I offer some historical and political context for movement conservatism, emphasizing the ideological influences of Frank Meyer and William F. Buckley as well as the political influence of Barry Goldwater. I then discuss how Reagan infused many of Meyer, Buckley, and Goldwater’s talking points with a humor that is both accessible and empowering. I will conclude by analyzing how Reagan’s humor was a concrete manifestation of certain principles of fusionism. Post-war conservatives found themselves in a peculiar situation: their school of thought had varying constituencies, each with different political priorities and anxieties. George Nash writes in The Conservative Intellectual Movement Since 1945: “The Right consisted of three loosely related groups: traditionalists or new conservatives, appalled by the erosion of values and the emergence of a secular, rootless, mass society; libertarians, apprehensive about the threat of the State to private enterprise and individualism; and disillusioned ex-radicals and their allies, alarmed by international Communism” (p. 118). Conservative intellectuals like Frank Meyer and William F. Buckley attempted to synthesize conservative schools of thought into a coherent modern Right. In 1964, Meyer published What is Conservatism?, an anthology of conservative essays that highlight the similarities between different conservative schools of thought. Buckley founded the National Review, a conservative magazine that published conservatives of all three persuasions. Its Mission Statement simultaneously appeals to the abandonment of “organic moral order,” the indispensability of a “competitive price system,” and the “satanic utopianism” of communism. 2 Both Meyer and Buckley thought that the primacy of the individual was an ideological through line of traditionalism and libertarianism. Meyer wrote in What is Conservatism?
that “the freedom of the person” should be “decisive concern of political action and political theory.” 3 Russell Kirk, a traditionalist-leaning conservative, similarly argued that the libertarian imperative of individual freedom is compatible with the “Christian conception of the individual as flawed in mind and will” because religious virtue “cannot be legislated,” meaning that freedom and virtue can be practiced and developed together. 4 The cultivation of the maximum amount of freedom that is compatible with traditional order thus became central to fusionist thought. Barry Goldwater, a senator from Arizona and the 1964 Republican nominee for president, championed the hybrid conservatism of Buckley and Meyer. Like Buckley in his Mission Statement, Goldwater’s acceptance speech at the Republican National Convention included a compound message in support of “a free and competitive economy,” “moral leadership” that “looks beyond material success for the inner meaning of [our] lives,” and the fight against communism as the “principal disturber of peace in the world.” 5 Goldwater also emphasized the fusionist freedom-order balance, contending that while the “single resolve” of the Republican party is freedom, “liberty lacking order” would become “the license of the mob and of the jungle.” 6 Having discussed the ideological underpinnings of conservative fusionism, I turn now to an analysis of how Reagan used humor as a tool for political framing. First, Reagan’s humor is distinctive for its accessibility: by this I mean that there are few barriers one must overcome to laugh at Reagan’s jokes. In his book Jokes: Philosophical Thoughts on Joking Matters, philosopher Ted Cohen calls jokes “conditional” if they presume that “their audiences [are] able to supply a requisite background, and exploit this background.” 7 The conditionality of a joke varies according to how much background it requires to be funny. In Cohen’s terms, Reagan’s jokes are not very conditional since many different audiences can appreciate their content. Cohen presents another distinction that is useful for analyzing Reagan’s humor: a joke is hermetic if the audience’s “background condition involves knowledge,” and it is affective if it “depends upon feelings … likes, dislikes and preferences” of the audience. Reagan’s jokes are not very conditional because they are at most hermetic, merely requiring some background knowledge to be appreciated, not a certain feeling or disposition, and this makes his jokes funny even to people who disagree with him. There are two ways in which Reagan’s humor is accessible. The first is that many of his jokes have apolitical premises. By apolitical, I mean that the requisite knowledge to make a joke funny does not directly relate to government or public affairs. For instance, Reagan said at the 1988 Republican National Convention, “I can still remember my first Republican Convention. Abraham Lincoln giving a speech that sent tingles down my spine.” To appreciate this joke, one only needs to know that Reagan is the oldest president to ever hold office. This piece of knowledge does not pertain to the government in any direct way—in fact, this joke would remain funny even if it were told by a different person at a nonpolitical conference with a reference to a nonpolitical historical figure.
Another example of Reagan’s apolitical humor is a joke he made in the summer of 1981: “I have left orders to be awakened at any time in case of national emergency, even if I'm in a cabinet meeting.” All one needs to understand here is that long meetings are often boring and sleep-inducing. One can even love long meetings and still find this joke funny because they understand the phenomenon of a boring, sleep-inducing meeting. Reagan made hundreds of these jokes during his time in office, all of which were, with few exceptions, funny to just about any listener. Their apolitical content ensured that no one political constituency would be unable to “get” Reagan’s jokes. The second way in which Reagan’s humor is accessible is that his political jokes were playful and had relatively innocuous premises, meaning that one did not have to agree with their sentiment to laugh. Reagan’s political jokes can be differentiated from his apolitical jokes because they do require knowledge about government or public affairs in order to be funny. One such piece of knowledge is the inefficiency of government bureaucracy. For example, in his speech, “A Time for Choosing,” Reagan says that “the nearest thing to eternal life we will ever see on this Earth is a government program.” In another speech, Reagan quips, “I have wondered at times about what the Ten Commandments would have looked like if Moses had run them through the U.S. Congress.” The premises of these jokes, though political, are not very contentious. To find them funny one simply needs to know that bureaucracy can be inefficient, or even that there exists a sort of joke in which bureaucracies are teased for being inefficient; one does not need to hate bureaucracy or even want to reduce bureaucracy. Cohen might offer the following analogy to explain the conditionality of Reagan’s bureaucracy jokes: one does not need to think that Polish people are actually stupid to laugh at a Polish joke; one simply needs to understand that there exists a sort of joke in which Polish people are held to be stupid. Reagan’s inoffensive political jokes are playful, lighthearted, and careful not to alienate or antagonize the opposition by presuming a controversial belief. The accessibility of Reagan’s humor reflects the overall need for fusionism to appeal to a wide variety of conservative groups—traditionalists, libertarians, and anti-communists. Instead of converting libertarians to traditionalism or vice versa, Nash writes that fusionists looked to foster agreement on “several fundamentals” of conservative thought. Reagan’s broadly accessible humor is both a concretization of and a strategy for fusionism’s broadly accessible ideology. The strategic potency of Reagan’s humor lies in its ability to bond people together. Cohen writes that the “deep satisfaction in successful joke transactions is the sense held mutually by teller and hearer that they are joined in feeling.” Friedrich Nietzsche expresses a similar sentiment when he writes that “rejoicing in our joy, not suffering over our suffering, makes someone a friend.” This joint feeling brings people together even more than a shared belief since the moment of connection is more visceral and immediate. One might ask, however: is it not the case that all politicians value humor as a means to connect with their audience and unify their constituencies? Why is Reagan’s humor any different? While humor can be used for a broader range of political goals, politicians often connect with one group at the expense of another.
For example, when asked what she would tell a male supporter who believed marriage was between one man and one woman, Senator Elizabeth Warren responded, “just marry one woman. I'm cool with that—assuming you can find one.” 9 Some Democrats praised this joke for its dismissal of homophobic beliefs, but others felt that the joke was condescending and antagonistic. This is the sort of divisive joke that Reagan was uninterested in—one that pleases one of his constituencies at the expense of another. Reagan would also avoid much of Donald Trump’s humor. For instance, Trump wrote in 2016, “I refuse to call Megyn Kelly a bimbo, because that would not be politically correct. Instead I will only call her a lightweight reporter!” Trump’s dismissal of “political correctness” is liberating to some but offensive to others. By contrast, Reagan’s exoteric style of humor welcomes all the constituencies of conservative fusion. Nash writes that fusionists were “tired of factional feuding,” and thus Reagan had no motivation to drive a larger wedge between traditionalists and libertarians. 1 The second thing to note about Reagan’s humor is its empowering tone. This takes two forms. First, Reagan elevates his audience by implying that if they controlled the government, they could do a far better job, a message which presumes and therefore posits their competence. For instance, in “A Time For Choosing,” Reagan argues that one complicated anti-poverty program could be made more effective by simply sending cash directly to families. In doing so, Reagan suggests that if any given audience member were in charge of the program, they could do a better job than the bureaucrats. Second, Reagan’s insistence on limited government affirms the average citizen’s capacity for self-government. Reagan famously states that “the nine most terrifying words in the English language are, ‘I’m from the government and I’m here to help.’” Since this implies that government aid will leave you worse off, it also posits the average citizen’s capacity for autonomy and therefore their maturity, level-headedness, and overall competence. The empowering tone of Reagan’s humor reflects fusionism’s emphasis on individual freedom and independence. Meyer writes that “the desecration of the image of man, the attack alike upon his freedom and his transcendent dignity, provide common cause” for both traditionalists and libertarians against liberals. Yet, a presupposition of a belief in freedom is a belief in people’s faculty to be free, to not squander their freedom on pointless endeavors or let their freedom collapse into chaos. This freedom-order balance is fundamental to fusionism as an ideology that straddles support from libertarians, who want as little government intervention as possible, and traditionalists, who want the state to maintain certain societal values. By positing the competence of the free individual in his jokes, Reagan affirms Russell Kirk’s idea that moral order will arise organically from individual freedom, not government coercion. In this paper, I argue that one of Reagan’s marks on the development of conservative thought was his careful use of humor to reflect certain ideological and practical commitments of post-war fusionism. By making his jokes accessible to the varying schools of conservatism and propounding the capacity of the individual for self-government, Reagan’s humor functioned as both a manifestation of and a strategy for fusionism’s post-war triumph.
References “A Selected Quote From: The President’s News Conference, August 12, 1986.” August 12, 1986 Reagan Quotes and Speeches. Ronald Reagan Presidential Foundation & Institute. Accessed August 6, 2022. https://www.reaganfoundation.org/ronald-reagan/reagan-quotes-speeches/news-conference-1/ . Buckley Jr., William F. "Our Mission Statement." National Review 19 (1955). Campbell, Colin. 2016. “Donald Trump Announces to the World That He Won’t Call Megyn Kelly a ‘Bimbo.’” Insider . January 27, 2016. https://www.businessinsider.com/donald-trump-fox-news-debate-megyn-kelly-bimbo-2016-1 . Cohen, Ted. Jokes: Philosophical Thoughts on Joking Matters . Chicago: University of Chicago Press, 1999. “‘George - Make It One More for the Gipper.’” The Independent. August 16, 1998. https://www.independent.co.uk/arts-entertainment/george-make-it-one-more-for-the-gipper-1172284.html . “Goldwater’s 1964 Acceptance Speech.” Washington Post. Last Modified 1998. https://www.washingtonpost.com/wp-srv/politics/daily/may98/goldwaterspeech.htm . Harris, Daniel I. "Friendship as Shared Joy in Nietzsche." Symposium 19, no. 1, (2015): 199-221. Meyer, Frank S., ed. What is Conservatism? Intercollegiate Studies Institute, 2015. Open Road Media. Nash, George H. The Conservative Intellectual Movement in America Since 1945 . Intercollegiate Studies Institute, 2014. Open Road Media. Panetta, Grace. 2019. “Elizabeth Warren Brings Down the House at CNN LGBT Town Hall With a Fiery Answer on Same-Sex Marriage.” Insider . October 11, 2019. https://www.businessinsider.com/elizabeth-warren-brings-down-house-cnn-lgbt-town-hall-video-2019-10 . Reagan, Ronald. “A Time for Choosing.” Transcript of speech delivered in Los Angeles, CA, October 27, 1964. https://www.reaganlibrary.gov/reagans/ronald-reagan/time-choosing-speech-october-27-1964#:~:text=%22The%20Speech%22%20is%20what%20Ronald,his%20acting%20career%20closed%20out . Sherrin, Ned, ed. Oxford Dictionary of Humorous Quotations . 4th ed. Oxford: Oxford University Press, 2008. Wilson, John. Talking With the President: The Pragmatics of Presidential Language . Oxford: Oxford University Press, 2015.

  • Adithya V. Raajkumar

Adithya V. Raajkumar “Victorian Holocausts”: The Long-Term Consequences of Famine in British India Adithya V. Raajkumar Abstract: This paper seeks to examine whether famines occurring during the colonial period affect development outcomes in the present day. We compute district-level measures of economic development, social mobility, and infrastructure using cross-sectional satellite luminosity, census data, and household survey data. We then use a panel of recorded famine severity and rainfall data in colonial Indian districts to construct cross-sectional count measures of famine occurrence. Finally, we regress modern day outcomes on the number of famines suffered by a district in the colonial era, with and without various controls. We then instrument for famine occurrence with climate data in the form of negative rainfall shocks to ensure exogeneity. We find that districts which suffered more famines during the colonial era have higher levels of economic development; however, high rates of famine occurrence are also associated with a larger percentage of the labor force working in agriculture, lower rural consumption, and higher rates of income inequality. We attempt to explain these findings by showing that famine occurrence is simultaneously related to urbanization rates and agricultural development. Overall, this suggests that the long-run effects of natural disasters which primarily afflict people and not infrastructure are not always straightforward to predict. 1. Introduction What are the impacts of short-term natural disasters in the long-run, and how do they affect economic development? Are these impacts different in the case of disasters which harm people but do not affect physical infrastructure? While there is ample theoretical and empirical literature on the impact of devastating natural disasters such as hurricanes and earthquakes, there are relatively few studies on the long-term consequences of short-term disasters such as famines. Furthermore, none of the literature focuses on society-wide development outcomes. The case of colonial India provides a well-recorded setting to examine such a question, with an unfortunate history of dozens of famines throughout the British Raj. Many regions were struck multiple times during this period, to the extent that historian Mike Davis characterizes them as “Victorian Holocausts” (Davis 2001, p. 9). While the short-term impacts of famines are indisputable, their long-term effects on economic development, perhaps through human development patterns, are less widely understood. The United Kingdom formally ruled India from 1857 to 1947, following an earlier period of indirect rule by the East India Company. The high tax rate imposed on peasants in rural and agricultural India was a principal characteristic of British governance. Appointed intermediaries, such as the landowning zamindar caste in Bengal, served to collect these taxes. Land taxes imposed on farmers often ranged from half to two-thirds of their produce, but could be as high as ninety to ninety-five percent. Many of the intermediaries coerced their tenants into farming only cash crops instead of a mix of cash crops and agricultural crops (Dutt 2001). Aside from high taxation, a laissez-faire attitude to drought relief was another principal characteristic of British agricultural policy in India.
Most senior officials in the imperial administration believed that serious relief efforts would cause more harm than good and, consequently, were reluctant to dispatch aid to afflicted areas (ibid). The consequences of these two policies were some of the most severe and frequent famines in recorded history, such as the Great Indian Famine of 1893, during which an estimated 5.5 to 10.3 million peasants perished from starvation alone, and over 60 million are believed to have suffered hardship (Fieldhouse 1996). Our paper focuses on three sets of outcomes in order to assess the long-term impact of famines. First, we examine macroeconomic measures of overall development, such as rural consumption per capita and the composition of the labor force. We also use nighttime luminosity gathered from satellite data as a proxy for GDP, the measurement of which using survey data can be unreliable. Second, we look at measures of human development: inequality, social mobility, and education, constructed from the India Human Development Survey I and II. Finally, we examine infrastructure, computing effects on village-level electrification, numbers of medical centers, and bus service availability. To examine impacts, we regress these outcomes on famine occurrence via ordinary least-squares (OLS). We use an instrumental-variables (IV) approach to ensure a causal interpretation via as-good-as-random assignment (1). We first estimate famine occurrence, the endogenous independent variable, as a function of rainfall shocks–a plausibly exogenous instrument–before regressing outcomes on predicted famine occurrence via two-stage least-squares (2SLS). Since the survey data are comparatively limited, we transform and aggregate panel data on rainfall and famines as counts in order to use them in a cross-section with the contemporary outcomes. We find for many outcomes that there is indeed a marginal effect of famines in the long run, although where it is significant it is often quite small. Where famines do have a significant impact on contemporary outcomes, the results follow an interesting pattern: a higher rate of famine occurrence in a given district is associated with greater economic development yet worse rural outcomes and higher inequality. Specifically, famine occurrence has a small but positive impact on nighttime luminosity–our proxy for economic development–and smaller, negative impacts on rural consumption and the proportion of adults with a college education. At the same time, famine occurrence is also associated with a higher proportion of the labor force being employed in the agricultural sector as well as a higher level of inequality as measured by the Gini index (2). Moreover, we find limited evidence that famine occurrence has a slightly negative impact on infrastructure, as more famines are associated with reduced access to medical care and bus service. We do not find that famines have any significant impact on social mobility–specifically, intergenerational income mobility–or infrastructure such as electrification in districts. This finding contradicts much of the established literature on natural disasters, which has predominantly found large and wholly negative effects. We attempt to explain this disparity by analyzing the impact of famines on urbanization rates to show that famine occurrence may lead to a worsening urban-rural gap in long-run economic development.
Thus, we make an important contribution to the existing literature and challenge past research with one of our key findings: short-term natural disasters which do not destroy physical infrastructure may have unexpectedly positive outcomes in the long-run. While the instrumental estimates are guaranteed to be free of omitted variable bias, the OLS standard errors allow for more precise judgments due to smaller confidence intervals. In around half of our specifications, the Hausman test for endogeneity fails to reject the null hypothesis of exogeneity, indicating that the ordinary-least squares and instrumental variables results are equally valid (3). How- ever, the instrumental variables estimate helps address other problems, such as attenuation bias, due to possible measurement error (4). Section 2 presents a review of the literature and builds a theoretical framework for understanding the impacts of famines on modern-day outcomes. Section 3 describes our data, variable construction, and summary statistics. Sections 4 and 5 present our results using ordinary least-squares and instrumental two-stage least- squares approaches. Section 6 discusses and attempts to explain these results. 2. Review and Theoretical Framework 1. The Impact of Natural Disasters Most of the current literature on natural disasters as a whole pertains to physical destructive phenomena such as severe weather or seismic events. Moreover, most empirical studies, such as Nguyen et al (2020) and Sharma and Kolthoff (2020) , focus on short-run aspects of natural disasters relating to various facets of proxi- mate causes (Huff 2020) or pathways of short-term recovery (Sharma and Kolthoff 2020). Famines are a unique kind of natural disaster in that they greatly affect crops, people, and animals but leave physical infrastructure and habitation relatively unaffected. We attempt to take this element of famines into account when explaining our results. Of the portion of the literature that focuses on famines, most results center on individual biological outcomes such as height, nutrition, (Cheng and Hui Shui 2019) or disease (Hu et al. 2017. A percentage of the remaining studies fixate on long-term socioeconomic effects at the individual level (Thompson et al. 2019). The handful of papers that do analyze broad long-term socioeconomic outcomes, such as Ambrus et al. (2015) and Cole et al. (2019), all deal with either long-term consequences of a single, especially severe natural disaster or the path dependency effects that may occur because of the particular historical circumstances of when a disaster occurs, such as in Dell (2013). On the other hand, our analysis spans several occurrences of the same type of phenomenon in a single, relatively stable sociohistorical setting, thereby utilizing a much larger and more reliable sample of natural disasters. Thus, our paper is the first to examine the long-term effects of a very specific type of natural disaster, famine, on the overall development of an entire region, by considering multiple occurrences thereof. Prior econometric literature on India’s famine era has highlighted other areas of focus, such as Burgess and Donaldson (2012), which shows that trade openness helped mitigate the catastrophic effects of famine. 
There is also plenty of historical literature on the causes and consequences of the famines, most notably in academic analyses by British historians (contemporarily, Carlyle 1900 and Ewing 1919; more recently, Fieldhouse), which tend to focus on administrative measures, or more specifically, the lack thereof. In terms of the actual effects of famine, all of the established literature asserts that natural disasters overwhelmingly influence economic growth through two main channels: destruction of infrastructure and resulting loss of human capital (Lima and Barbosa 2019, Nguyen et al. 2020, Cole et al. 2019), or sociopolitical historical consequences, such as armed conflict (Dell 2013, Huff 2019). Famines pose an interesting question in this regard since they tend to result in severe loss of human capital through population loss due to starvation but generally result in smaller-scale infrastructure losses (Agbor and Price 2013). This is especially the case for rural India, which suffered acute famines while having little infrastructure in place (Roy 2006). We examine three types of potential outcomes: overall economic development, social mobility, and infrastructure, as outlined in section three. Our results present a novel finding in that famine occurrence seems to positively impact certain outcomes while negatively impacting most others, which we attempt to explain by considering the impact of famines on urbanization rates. Famines can impact outcomes through various mechanisms; therefore, we leave the exact causal mechanism unspecified and instead treat famines as generic shocks with subsequent recovery of unknown speed. If famines strike repeatedly, their initial small long-term effects on outcomes can escalate. Our intuition for distinguishing a long-run effect of famines rests on a simple growth model in which flow variables such as growth quickly return to the long-run average after a shock, but stock variables such as GDP or consumption only return to the average asymptotically (5). Thus, over finite timespans, the differences in stock variables between districts that undergo famines and those that do not should be measurable even after multiple decades. As mentioned below, this is in line with more recent macroeconomic models of natural disasters such as Hochrainer (2009) and Bakkensen and Barrage (2018). Assume colonial districts (indexed by i) suffer n_i famines over the time period (in our data, the years 1870 to 1930), approximated as average constant rates f_i. The occurrence of famine can then be modeled by a Poisson process with parameter f_i, which determines the expected time between famines, even though the exact time between any two famines is random and thus unknown until it is realized (6). For simplicity, we assume that famines cause damage d to a district's economy, from which time r_i is needed to recover to its assumed long-run, balanced growth path (7). We make no assumptions on the distributions of d and r_i except that r_i is dependent on d and that the average recovery time E[r_i] is similarly a function of E[d].
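To make this intuition concrete, one illustrative formalization (a sketch under the assumptions just stated, treating f_i as an arrival rate; the counterfactual level y* and the proportionality constant κ are introduced here only for exposition) is:

\[ E[n_i] = f_i T, \qquad E[\text{time between famines}] = \tfrac{1}{f_i}, \]
\[ E[y_i \mid n_i] \approx y^{*} - \kappa \, n_i \, E[d], \qquad \kappa > 0, \]

where T is the length of the observation window (here, 1870 to 1930). The second expression simply says that, over a finite horizon, a district's measured stock outcome lies below its counterfactual balanced-growth level by an amount roughly proportional to the number of famines suffered and the expected damage per famine.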
If the district had continued on the growth path directly without the famine, absent any confounding effects, it would counterfactually have more positive outcomes today by a factor that depends on the expected loss per famine and thus on n_i, the number of famines suffered. We cannot observe the counterfactuals (the outcome in the affected district had it not experienced a famine), so instead, we use the unaffected districts in the sample as our comparison group. Controlling for factors such as population and existing infrastructure, each district should provide a reasonably plausible counterfactual for the other districts in terms of the number of famines suffered. Then, the differences in outcomes among districts measured today, y_i, can be modeled as a function of the differences in the number of famines, n_i. Finally, across the entire set of districts, this can be used to represent the average outcome E[y_i] as a function of the number of famines, which forms the basis of our ordinary least-squares approach in section four. This assumes that famine occurrence is uncorrelated with unobserved determinants of the outcome. To account for the possibility that this correlation is non-zero, we also use rainfall shocks to isolate the randomized part of our independent variable in order to ensure that famine occurrence is uncorrelated with the error term. The important question is the nature of the relationship between d and r_i. While f_i can be easily inferred from our data, d and especially r_i are much more difficult to estimate without detailed, high-level, and accurate data. Since the historical record is insufficiently detailed to allow precise estimation of the parameters of such a model, we constrain the effects of famine to be linear in our estimation in sections four and five. 2. Estimation Having constrained the hypothesized effects of famine to be linear, in section four we would prefer to estimate (1) below, where β represents our estimate of the effect of famine severity (famine_i), measured as the number of famines undergone by the district, on the outcome variable y_i, and X_i is a vector of contemporary (present-day) covariates, such as mean elevation and soil quality. The constant term captures the mean outcome across all districts, and ε_i is a district-specific error term. Much of the research on famine occurrence in colonial India attributes the occurrence of famines and their consequences to poor policies and administration by the British Raj. If this is the case, and these same policies hurt the development of districts in other ways, such as by stunting industrialization directly, then the estimation of (1) will not show the correct effect of famines per se on comparative economic development. Additionally, our observations of famines, which are taken indirectly from district-level colonial gazetteers and reports, may be subject to "measurement" error that is non-random. For example, the reporting of famines in such gazetteers may be more accurate in well-developed districts that received preferential treatment from British administrators. To solve this problem, we turn to the examples of Dell et al. (2012), Dell (2013), Hoyle (2010), and Donaldson and Burgess (2012), who use weather shocks as instruments for natural disaster severity.
While Dell (2013) focuses on historical consequences arising from path dependency and Hoyle (2010) centralizes on productivity, the instrumental methodology itself is perfectly applicable to our work. Another contribution of our pa- per is to further the use of climate shocks as instruments. We expand upon the usage of climate shocks as instruments because they fit the two main criteria for an instrumental variable. Primarily, weather shocks are extremely short-term phenomena, so their occurrence is unlikely to be correlated with longer-term climate factors that may impact both historical and modern outcomes. Secondly, they are reasonably random and provide exogenous variation with which we can estimate the impact of famines in an unbiased manner. We first estimate equation (2) below before estimating (1) using the predicted occurrence of famine from (2): We calculate famine as the number of reported events occurring in our panel for a district and rainfall as the number of years in which the deviation of rainfall from the mean falls below a certain threshold, nominally the fifteenth and tenth percentiles of all rainfall deviations for that district. As in (1), there is a constant term and error term. As is standard practice, we include the control variables in the first-stage even though they are quite plausibly unrelated to the rainfall variable. This allows us to estimate the impacts of famine with a reasonably causal interpretation; since the assignment of climate shocks is ostensibly random, using them to “proxy” for famines in this manner is akin to “as good as random” estimation. The only issue with this first-stage specification is that while we instrument counts of famine with counts of lo w rainfall years, the specific years in which low rainfall occurs theoretically need not match up with years in which famine is recorded in a given district. Therefore, we would prefer to estimate (3) below instead, since it provides additional identification through a panel dataset. Any other climate factors should be demeaned out by the time effects. Other district characteristics that may influence agricultural productivity and therefore famine severity, such as soil quality, should be differenced out with district effects, represented by the parameters. Differences in administrative policy should be resolved with provincial fixed effects. Unfortunately, we would then be unable to implement the standard instrumental variables practice of including the control variables in both stages since our modern-day outcomes are cross-sectional (i.e, we only have one observation per district for those measures). Nevertheless, our specification in (2) should reason- ably provide randomness that is unrelated to long-term climate factors, as mentioned above. Finally, we collapse the panel by counting the number of famines that occur in the district over time in order to compare famine severity with our cross-sectional modern-day outcomes and to get an exogenous count measure of famine that we can use de novo in (1). To account for sampling variance in our modern-day estimates, we use error weights constructed from the current population of each district meaning that our approach in section 5 is technically weighted least-squares, not ordinary. While this should account for heteroscedasticity in the modern observations, we use robust SM estimators in our estimations (McKean 2004, Barrera and Yohai 2006) to assure that our standard errors on the historical famine and rainfall variables are correct (8). 
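For concreteness, the estimating equations described above can plausibly be written as follows (the coefficient symbols are ours, chosen to match the surrounding definitions; the paper's own typeset equations may differ in notation):

\[ y_i = \alpha + \beta \, \mathrm{famine}_i + X_i'\gamma + \varepsilon_i \qquad (1) \]
\[ \mathrm{famine}_i = \pi_0 + \pi_1 \, \mathrm{rainshock}_i + X_i'\delta + \nu_i \qquad (2) \]
\[ \mathrm{famine}_{it} = \pi \, \mathrm{rainshock}_{it} + \mu_i + \lambda_t + \nu_{it} \qquad (3) \]
\[ y_i = \alpha + \theta \, \mathrm{rainshock}_i + X_i'\gamma + e_i \qquad (4) \]

Here rainshock_i counts the years in which a district's rainfall deviation falls below the chosen percentile threshold, μ_i and λ_t denote district and year effects in the panel version (3), the two-stage least-squares estimates are obtained by substituting the fitted values of famine_i from (2) into (1), and (4) is the reduced form referenced in section five, in which the outcome is regressed directly on the rainfall-shock count.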
The results of these approaches are detailed in section six. 3. Data 1. Sources and Description Our principal data of interest is a historical panel compiled from a series of colonial district gazetteers by Srivastava (1968) and details famine severity at the district level over time in British India from 1870 to 1930. Donaldson and Burgess (2010) then code these into an ordinal scale by using the following methodology: 4 – District mentioned in Srivastava’s records as “intensely affected by famine” 3 – District mentioned as “severely affected” 2 – Mentioned as “affected” 1 – Mentioned as “lightly affected” 0 – Not mentioned 9 – Specifically mentioned as being affected by spillover effects from a neighbor- ing district (there are only four such observations, so we exclude them) In our own coding of the data, we categorize famines as codes 2, 3, and 4, with severe famines corresponding to codes 3 and 4. We compute further cross-sectional measures, chiefly the total number and proportion of famine-years that a district experienced over the sixty-year periods. This is equivalent to tabulating the frequency of code occurrences and adding the resulting totals for codes 2 to 4 to obtain a single count measure of famine. Our results are robust to using “severe” (codes 3 and 4) famines instead of codes 2, 3, and 4. Across the entire panel, codes from 0 to 4 occurred with the following frequencies: 4256, 35, 207, 542, and 45 respectively. We also supplemented this panel with panel data on rainfall over the same time period. Several thousand measuring stations across India collected daily rainfall data over the time period, which Donaldson (2012) annualizes and compares with crop data. The rainfall data in Donaldson (2012) represents the total rainfall in a given district over a year, categorized by growing seasons of various crops (for ex- ample, the amount of total rainfall in a district that fell during the wheat growing season). Since different districts likely had different shares of crops, we average over all crops to obtain an approximation of total rainfall over the entire year. We additionally convert this into a more relevant measure in the context of famine by considering only the rainfall that fell during the growing seasons of crops typically grown for consumption in the dataset; those being bajra, barley, gram (bengal), jowar (sorghum), maize, ragi (millet), rice, and wheat. Finally, to ensure additional precision over the growing season, we simply add rainfall totals during the grow- ing seasons of the two most important food crops - rice and wheat - which make up over eighty percent of food crops in the country (World Bank, UN-FAOSTAT). The two crops have nearly opposite growing seasons, so the distribution of rainfall over the combined growing seasons serves as an approximation of total annual rainfall. Our results are robust with regards to all three definitions; the pairwise correlations between the measures are never less than ninety percent. Moreover, the cross-sectional famine instruments constructed from these are almost totally identical as the patterns in each type of rainfall (that is, their statistical distributions over time) turn out to be the same. As expected, there appears to be significant variation in annual rainfall. The ex- ample of the Buldana district (historically located in the Bombay presidency, now in Maharashtra state) highlights this trend, as shown in Figure 1 on the following page. 
In general, the trends for both measures of rainfall over time are virtually in- distinguishable aside from magnitude. As anticipated, famine years are marked by severe and/or sustained periods of below-average rainfall although the correlation is not perfect. There are a few districts which have years with low rainfall and no recorded famines, but this can mostly be explained by a lack of sufficient records, especially in earlier years. On the opposite end of the spectrum, there are a few districts that recorded famines despite above-average rainfall, which could possibly be the result of non-climatic factors such as colonial taxation policies, conflicts, or other natural disasters, such as insect plagues. However, the relationship between rainfall patterns and famine occurrence suggests that we can use the former as an instrument for the latter especially since the correlation is not perfect, and famine occurrence is plausibly non-random due to the impact of British land ownership policies. Figure 1: Rainfall over time for Buldana from 1870 to 1920 Notes : The dashed line shows mean rainfall for all food crops; the solid line shows the total rainfall over the wheat and rice growing seasons. The blue and purple lines represent the historical means for these measures of rainfall. The rad shading denotes years in which famines are recorded as having affected the district. We construct count instruments for famines by first computing the historic mean and annual deviation for rainfall in each district. We can then count famines as years in which the deviation was in the bottom fifteenth percentile in order to capture relatively severe and negative rainfall shocks as plausible famine causes. For severe famines, we use the bottom decile instead. The percentiles were chosen based on famine severity so that the counts obtained using this definition were as similar as possible to the actual counts constructed from recorded famines (see above) in the panel dataset. For modern-day outcomes, we turn to survey data from the Indian census as well as the Indian Human Development Survey II, which details personal variables (ex. consumption and education), infrastructure measures (such as access to roads), and access to public goods (ex. hospital availability) at a very high level of geographical detail. An important metric constructed from the household development surveys is that of intergenerational mobility as measured by the expected income percentile of children whose parents belonged to a given income percentile, which we obtain from Novosad et al. (2019). Additionally, as survey data can often be unreliable, we supplement these with an analysis of satellite luminosity data, which provides measures of the (nighttime) luminosity of geographic cells, which should serve as a more reliable proxy for economic development, following Henderson et al. (2011) and Pinkovsky and Sala-i-Martin (2016). These data are mostly obtained from Novosad et. al (2018, 2019) and Iyer (2010), which we have aggregated to the district level. The outcomes variables are as follows: 1. Log absolute magnitude per capita. We intend this to serve as a proxy for a district’s economic development in lieu of reliable GDP data. This is the logarithm of the total luminosity observed in the district divided by the district’s population. These are taken from Vernon and Storeygard (2011) by way of Novosad et al. (2018). 2. Log rural consumption per capita. This is taken from the Indian Household Survey II by way of Novosad et al. 
(2019). 3. Share of the workforce employed in the cultivation sector, intended as a mea- sure of rural development and reliance on agriculture (especially subsistence agri- culture). This is taken from Iyer et al. (2010). 4. Gini Index, from Iyer (2010), as a measure of inequality. 5. Intergenerational income mobility (father-son pairs), taken from Novosad et al. (2018). Specifically, we consider the expected income percentile of sons in 2012 whose fathers were located in the 25th percentile for household income (2004), using the upper bound for robustness (9). 6. The percentage of the population with a college degree, taken from census data. 7. Electrification, i.e. the percent of villages with all homes connected to the power grid (even if power is not available twenty-four hours per day). 8. Percent of villages with access to a medical center, taken from Iyer (2010), as a measure of rural development in the aspect of public goods. 9. Percent of villages with any bus service, further intended as a measurement of public goods provision and infrastructure development. Broadly speaking, these can be classified into three categories with 1-3 representing broad measures of economic development, 4-6 representing inequality and human capital, and 7-9 representing the development of infrastructure and the provision of public goods. As discussed in section two, our preliminary hypothesis is that the occurrence of famines has a negative effect on district development, which is consistent with most of the literature on disasters. Hence, given a higher occurrence of famine, we expect that districts suffering from more famines during the colonial period will be characterized by lower levels of development, being (1) less luminous at night, (2) poorer in terms of a lower rural consumption, and (3) more agricultural, i.e have a higher share of the labor force working in agriculture. Similarly, with regards to inequality and human capital, we expect that more famine-afflicted districts will have (4) higher inequality in terms of a higher Gini index, (5) lower upward social mobility in terms of a lower expected income percentile for sons whose fathers were at the 25th income percentile, and (6) a lower percentage of adults with a college education. Finally, by the same logic, these districts should be relatively underdeveloped in terms of infrastructure, and thus (7) lack access to power, (8) lack access to medical care, and (9) lack access to transportation services. Finally, even though our independent variable when instrumented should be exogenous, we attempt to control for geographic and climatic factors affecting agriculture and rainfall in each district, namely: - Soil type and quality (sandy, rocky or barren, etc.) - Latitude (degree) and mean temperature (degrees Celsius) - Coastal location (coded as a dummy variable) - Area in square kilometers (it should be noted that district boundaries correspond well, but not perfectly, to their colonial-era counterparts) As mentioned previously, research by Iyer and Banerjee (2008, 2014) suggests that the type of land-tenure system implemented during British rule has had a huge impact on development in the districts (10). We also argue that it may be re- lated to famine occurrence directly (for example, in that tenure systems favoring landlords may experience worse famines), in light of the emerging literature on agricultural land rights, development, and food security (Holden and Ghebru 2016, Maxwell and Wiebe 1998). 
Specifically, we consider specifications with and without the proportion of villages in the district favoring a landlord or non-land- lord tenure system, obtained from Iyer (2010). In fact, the correlation between the two variables in our dataset is slightly above 0.23, which is not extremely high but enough to be of concern in terms of avoiding omitted variable bias. We ultimately consider four specifications for each dependent variable based on the controls in X from equation (1): no controls, land tenure, geography, and land tenure with geography. Each of these sets of controls addresses a different source of omitted variable bias: the first, land-tenure, addresses the possibility of British land-tenure policies causing both famines and long-term development outcomes. The second, geography, addresses the possibility of factors such as mean elevation and temperature impacting crop growth while also influencing long-term development (for example, if hilly and rocky districts suffer from more famines because they are harder to grow crops in but also suffer from lower development because they are harder to build infrastructure in or access via transportation). We avoid using contemporary controls for the outcome variables (that is, including infrastructure variables, income per capita, or welfare variables in the right- hand side) because many of these could reasonably be the result of the historical effects (the impact of famines) we seek to study. As such, including them as controls would artificially dilate the impact of our independent variable. 2. Summary statistics Table I presents summary statistics of our cross-sectional dataset on the follow- ing page. One cause for potential concern is that out of the over 400 districts in colonial India, we have only managed to capture 179 in our sample. This is due chiefly to a paucity of data regarding rainfall; there are only 191 districts captured in the original rainfall data from Donaldson (2012). In addition, the changing of district names and boundaries over time makes the matching of old colonial districts with modern-day administrative subdivisions more imprecise than we would like. Nevertheless, these districts cover a reasonable portion of modern India as well as most of the regions which underwent famines during imperial rule. The small number of districts may also pose a problem in terms of the standard errors on our coefficients, as the magnitude of the impacts of famines that occurred over a hundred years ago on outcomes today is likely to be quite small. Table 1 – Summary Statistics Source : Author calculations, from Iyer (2010), Iyer and Bannerjee (2014), Novosad et. al (2018), Asher and No- vosad (2019), Donaldson and Burgess (2012). 4. Ordinary Least Squares Although we suspect that estimates of famine occurrence and severity based on recorded historical observations may be nonrandom for several reasons (mentioned in section two and three), we first consider direct estimation of (1) from section two. For convenience, equation (1) is reprinted below: As in the previous section, famine refers to the number of years that are coded 2, 3, or 4 in famine severity as described in Srivastava (1968). X is the set of con- temporary covariates, also described in section three. We estimate four separate specifications of (1) where X varies: 1. No controls, i.e. X is empty. 2. Historical land tenure, to capture any effects related to British land policy in causing both famines and long-term developmental outcomes. 3. 
Geographical controls relating to climatic and terrestrial factors, such as temperature, latitude, soil quality, etc. 4. Both (2) and (3). Table II presents the estimates for the coefficients on famines and tenure for our nine dependent variables on the following page (we omit coefficients and confidence intervals for the geographic variables for reasons of brevity and relevance in terms of interpretation). In general, the inclusion or exclusion of controls does not greatly change the magnitudes of the estimates nor their significance, except for a few cases. We discuss effects for each dependent variable below: Log of total absolute magnitude in the district per capita : The values for famine suggest that interestingly, each additional famine results in anywhere from 1.8 to 3.6 percent more total nighttime luminosity per person in the district. As mentioned in section three, newer literature shows that nighttime luminosity is a far more reliable gauge of development than reported survey measures such as GDP, so this result is not likely due to measurement error. Thus, as the coefficient on famine is positive, it seems that having suffered more famines is positively related to development. This in fact is confirmed by the instrumental variables (IV) estimates in Table III (see section five). Curiously, the inclusion of tenure and geography controls separately does not change the significance, but including both of them together in the covariates generates far larger confidence intervals than expected and reduces the magnitude of the effect by an entire order of magnitude. This may be because each set of controls tackles a different source of omitted variable bias. As expected, however, land tenure plays a significant role in predicting a district’s development; even a single percent increase in the share of villages with a tenant-favorable system is associated with a whopping 73-80% additional night- time luminosity per person. Log rural consumption per capita : We find evidence that additional famines are associated with lower rural consumption, albeit on a minuscule scale. This suggests that the beneficial effect of famines on development may not be equal across urban and rural areas but instead concentrated in cities. For example, there might be a causal pathway that implies faster urbanization in districts that undergo more famines. Unlike with luminosity, historical land tenure does not seem to play a role in rural consumption. Percent of the workforce employed in cultivation : As expected, additional famines seem to play a strongly significant but small role with regards to the labor patterns in the district. Districts with more famines seem to have nearly one percent of the labor force working in cultivation for each additional famine, suggesting famines may inhibit development of industries other than agriculture and cultivation. Our instrumental variables estimates confirm this. Puzzlingly, land tenure does not seem to be related to this very much at all. Gini Index : The coefficients for the number of famines seem to be difficult to interpret as both those for the specification with no controls and with both sets of controls are statistically significant with similar magnitudes yet opposite signs. The confidence interval for the latter is slightly narrower. This is probably because the true estimate is zero or extremely close to zero, and the inclusion or exclusion of controls is enough to narrowly affect the magnitude to as to flip the sign of the co- efficient. 
In order to clarify this, more data is needed – i.e for more of the districts in colonial India to be matched in our original sample. At the very least, we can say that land tenure clearly has a large and significant positive association with in- equality. Unfortunately, this association cannot be confirmed as causal due to the lack of an instrument for land tenure which covers enough districts of British India. However, as Iyer and Banerjee (2014) argue, the assignment of tenure systems itself was plausibly random (having been largely implemented on the whims of British administrators) so that one could potentially interpret the results as causal with some level of caution. Intergenerational income mobility : Similarly, we do not find evidence of an association between the number of famines suffered by a district in the colonial era and social mobility in the present day, but we do find a strong impact of land tenure, which makes sense to the reported institutional benefits of tenant-favorable systems in encouraging development as well as the obvious benefits for the tenants and their descendants themselves. Each one-percent increase in the share of villages in a district that uses a tenant-favorable system in the colonial era is associated with anywhere from ten to thirteen percent higher expected income percentile for sons whose fathers were at the 25th percentile in 1989 although the estimates presented in Table II are an upper bound. College education : We find extremely limited evidence that famines in the colonial period are associated with less human capital in the present day, with a near-zero effect of additional famines on the share of adults in a district with a college degree (in fact, rounded to zero with five to six decimal places). Land tenure similarly has very little or no effect. Electrification, access to medical care , bus service : All three of these infra- structure and public goods variables show a negligible effect of famines, but strong impacts of historical land tenure. Ultimately, we find that famines themselves seem to have some positive impact on long-term development despite also being associated with many negative out- comes, such as a greater share of the workforce employed in agriculture (i.e as opposed to more developed activities such as manufacturing or service). Another finding of note is that while famines do not seem to have strong associations with all of our measures, land tenure does. This suggests that the relationship between land-tenure and famine is worth looking into. The existence of bias in the recording of famines, as well as the potential for factors that both cause famines while simultaneously affecting long-term outcomes, present a possible problem with these estimates. We have already attempted to account for one of those, namely historical land tenure systems. Indeed, in most of the specifications, including tenure in the regression induces a decrease in the magnitude of the coefficient on famine. As the effect of famine tends to be extremely small to begin with, the relationship is not always clear. Other errors are also possible. For example, it is possible that a given district experienced a famine in a given year, but insufficient records of its occurrence remained by 1968. Then, Srivastiva (1968) would have assigned that district a code of 0 for that year, but the correct code should have been higher. 
Indeed, as described in section three, a code of 0 corresponds to "not mentioned", which encompasses both "not mentioned at all" and "not mentioned as being affected by famine" (Donaldson and Burgess 2010). While measurement error in the dependent variable is usually not a problem, error in the independent variable can lead to attenuation bias in the coefficients since the ordinary least-squares algorithm minimizes the error on the dependent variable by estimating coefficients for the independent variables. The greater this error, the more the ordinary least-squares method will bias the estimated coefficients towards zero in an attempt to minimize error in the dependent variable (Riggs et al. 1978). For these reasons, we turn to instrumental variables estimation in section five in an attempt to provide additional identification. Table 2 – Ordinary Least-Squares Estimates Notes: Independent variable is the number of years with recorded famines (famine code of 2 or above). Control specifications: (a) no controls, (b) land-tenure control (proportion of villages with tenant-ownership land tenure system), (c) geographic controls (see section three for enumeration), (d) both land-tenure and geographic controls. Source: Author calculations. *** Significant at the 1 percent level or below (p ≤ 0.01). ** Significant at the 5 percent level (0.01 < p ≤ 0.05). * Significant at the 10 percent level (0.05 < p ≤ 0.1). 5. Weather Shocks as an Instrument for Famine Severity As explained in section two, there are many possible reasons why recorded famine data may not be exogenous. In any case, it would be desirable to have a truly exogenous measure of famine, for which we turn to climate data in the form of rainfall shocks. Rainfall is plausibly connected to the occurrence of famines, especially in light of the colonial government's laissez-faire approach to famine relief (Bhatia 1968). For example, across all districts, mean rainfall averaged around 1.31m in district-years without any famine and around 1.04m in district-years at least somewhat affected by famine (code 1 or above). Figure 2 below shows that there is a very clear association between rainfall activity and famines in colonial India, although variability in climate data as well as famine and agricultural policy means that there are some high-rainfall districts which do experience famines as well as low-rainfall districts which do not experience as many famines, as noted in section three. Figure 2: Associations between famine occurrence and rainfall trends It should be clear from the first three scatterplots above that there is a negative relationship between the amount of rainfall a district receives and the general prevalence of famine and, more importantly, between the total size of the rainfall shocks and the total occurrences of famine in that district. From the final plot we see that when we classify low-rainfall years by ranking the deviations from the mean, counting the number of years in which these deviations are in the bottom fifteenth percentile corresponds well to the actual number of recorded famines for each district.
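As an illustration of how such counts can be constructed, the following sketch (in Python with pandas; the file and column names are hypothetical, not those of the paper's replication files) tabulates recorded famine-years and low-rainfall years per district:

import pandas as pd

# Hypothetical panel: one row per (district, year) with a famine severity code
# (0-4, as in Srivastava 1968 / Donaldson and Burgess) and annual rainfall.
panel = pd.read_csv("famine_rainfall_panel.csv")

# Famine count: years coded 2, 3, or 4 ("affected" through "intensely affected").
famine_counts = (
    panel.assign(is_famine=panel["famine_code"].isin([2, 3, 4]))
         .groupby("district")["is_famine"]
         .sum()
         .rename("n_famines")
)

# Rainfall shock count: years whose deviation from the district's historic mean
# falls in the bottom fifteenth percentile of that district's deviations.
panel["rain_dev"] = (
    panel["rainfall"] - panel.groupby("district")["rainfall"].transform("mean")
)
cutoff = panel.groupby("district")["rain_dev"].transform(lambda s: s.quantile(0.15))
shock_counts = (
    (panel["rain_dev"] <= cutoff)
    .groupby(panel["district"])
    .sum()
    .rename("n_rain_shocks")
)

# Cross-section of counts used in the OLS and IV regressions of sections four and five.
cross_section = pd.concat([famine_counts, shock_counts], axis=1)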
In order to use this to measure famine exogenously, we first estimate (2) (see below, section two and section three), where we predict the number of famines from the number of negative rainfall shocks, represented by deviations from the mean in the bottom fifteen percent of all deviations, before estimating (1) using this predicted estimate of famine in place of the recorded values. Our reduced-form estimates, where we first run (1) using the number of negative rainfall shocks directly, are presented on the following pages in Table III (11). The reduced-form equation is shown as (4) below as well: Table 3 – Reduced Form Estimates for IV Notes: Independent variable is the number of years in which the deviation of rainfall from the historic mean is in the bottom fifteenth percentile. Control specifications: (a) no controls, (b) land-tenure control (proportion of villages with tenant-ownership land tenure system), (c) geographic controls (see section three for enumeration), (d) both land-tenure and geographic controls. Source: Author calculations. *** Significant at the 1 percent level or below (p ≤ 0.01). ** Significant at the 5 percent level (0.01 < p ≤ 0.05). * Significant at the 10 percent level (0.05 < p ≤ 0.1). From Table III, it would appear that negative rainfall shocks have similar effects on the outcome variables as do recorded famines in terms of the statistical significance of the coefficients on the independent variable. There is also the added benefit that we can confirm our very small and slightly negative effects of famines on the proportion of adults with a college education: for each additional year of exceptionally low rainfall in a district, the proportion of adults with a college education in 2011 decreases by 0.1%. In addition, whereas the coefficients in Table II were conflicting, Table III provides evidence in favor of the view that additional famines increase inequality in a district as measured by the Gini index. However, the magnitudes of the effects of famines or low-rainfall years are predominantly larger than their counterparts in Table II, to a rather puzzling extent. While we stated earlier in section three that famines and rainfall are not perfectly correlated, it might be that variation in historical rainfall shocks can better explain variation in outcomes in the present day. In order to get a better understanding of the relationship between the two, it would first be wise to look at the coefficients presented in Table IV, which are the results of the two-stage least-squares estimation using low-rainfall years as an instrument for recorded famines. Table IV follows the patterns established in Table II and Table III with regard to the significance of the coefficients as well as their signs; famines have a statistically significant and positive impact on nighttime luminosity, a significant negative impact on rural consumption, and a positive impact on the percent of the labor force employed in agriculture. The results with respect to Table II, concerning the impact of famine on the proportion of adults with a college education, are also very similar. Most other specifications do not show a significant effect of famine on the respective outcome, with the exception of access to medical care.
Unlike in Table II and Table III, each additional famine is associated with an additional 11.2 to 12.5 percent of villages in that district having some form of medical center or service readily accessible (according to the specifications with geographic controls, which we argue are more believable than the ones without). However, this relationship breaks down at the level of famines seen in some of our districts; a district having suffered nine or ten famines would see more than 100% of its villages having access to medical centers (which is clearly nonsensical), suggesting we may need to look for nonlinearity in the effects of famine in section six. Unfortunately, unlike in Table III, it seems that we cannot conclude much regarding the effect of famines on intergenerational mobility as the coefficients are contradictory and generally not statistically significant. For example, the coefficient on famine in the model without any controls is highly significant and positive, but the coefficient in the model with all controls is not significant and starkly negative. The same is true for the effect of famines on the Gini index. One possibility is that the positive coefficients on famine for both of these dependent variables are driven by outliers as our data was relatively limited due to factors mentioned in section 2. The magnitudes of the coefficients in Table IV are generally smaller than those presented in Table III but still significantly larger than the ones in Table II. For ex- ample, in Table II, the ordinary least-squares model suggests that each additional historical famine is associated with an additional 0.5 to 0.9 percent of the district’s workforce being employed in cultivation in 2011, but in Table IV, these numbers range from 1.5 to 4.3 percent for the same specifications, representing almost a tenfold increase in magnitude in some cases. One reason for this is the possibility attenuation bias in the ordinary least-squares regression; here, there should not be any attenuation bias in our results as the use of instruments which we assume are not correlated with any measurement error in the recording of famines excludes that possibility (Durbin 1954). On the other hand, the Hausman test for endogeneity (the econometric gold standard for testing a model’s internal validity) often fails to reject the null hypothesis that the recorded famine variable taken from Srivastava (1968) and Donaldson and Burgess (2012) is exogenous. To be precise, in one sense the test fails to reject the null hypothesis that the rainfall data add no new “information”, which is not captured in the reported famine data. It is possible that our rainfall instrument, as used in equation (2) is invalid due to endogeneity with the regression model specified in equation (1) despite being excluded from it. The only way to test this possibility is to conduct a Sargan-Han- sen test12 on the model’s overidentifying restrictions; however, we are unable to conduct the test as we have a single instrument. It follows that our model is not actually overidentified (12). Table 4 –Instrumental Variables Estimates Notes : Independent variable is number of years with recorded famines (famine code of 2 or above), instrumented with number of low-rainfall years (rainfall deviation from historic mean in bottom fifteenth percentile). 
Control specifications: (a) no controls, (b) land-tenure control (proportion of villages with tenant-ownership land tenure system), (c) geographic controls (see section three for enumeration), (d) both land-tenure and geographic controls. Source: Author calculations. *** Significant at the 1 percent level or below (p ≤ 0.01). ** Significant at the 5 percent level (0.01 < p ≤ 0.05). * Significant at the 10 percent level (0.05 < p ≤ 0.1). We also need to consider the viability of our instrumental variables estimates. Table V on the following page offers mixed support. While the weak-instrument test always rejects the null hypothesis of instrument weakness, for models with more controls, namely those with geographic controls, the first-stage F-values – the test statistics of interest – are relatively small, which is not encouraging, as a value of ten or more is generally recommended to be assured of instrument strength (Staiger and Stock 1997) (13). In Table IV, we show confidence intervals obtained by inverting the Anderson-Rubin test, which accounts for instrument strength in determining the statistical significance of the coefficients. These are wider in the models with more controls, although not usually wide enough to move coefficients from statistically significant to statistically insignificant. However, additional complications arise when considering the Hausman tests for endogeneity. The p-values in Table V suggest that around half of the regression specifications in Table IV do not suffer from a lack of exogeneity, meaning that the ordinary least-squares results are just as valid for those specifications. A more serious issue is that the Hausman test rejects the null hypothesis of exogeneity for four out of nine outcome variables. Combined with the fact that the first-stage F-statistics are concerningly low for the specifications with geographic controls, this means that not only are the ordinary least-squares results likely to be biased, but the instrumental variables estimates are also likely to be imprecise. This is most concerning for the results related to rural consumption and the percent of the workforce in agriculture. Conversely, the results for nighttime luminosity are not affected, as the Hausman tests do not reject exogeneity for that outcome variable. While we might simply use the ordinary least-squares results to complement those obtained via two-stage least-squares, the latter are lacking in instrument strength. More importantly, the differences in magnitude between the coefficients presented in Table II and in Table IV are too large to allow this use without abandoning consistency in the interpretation of the coefficients. Ultimately, given that the Hausman tests show that instrumentation is at least somewhat necessary, and the actual p-values for the weak-instrument test are still reasonably low (being less than 0.05 even in the worst case), we prefer to uphold the instrumental variables results, imperfect as some of them may be. We argue that it is better to have unbiased estimates from the instrumental variables procedure (IV), even if they may be less reliable, than to risk biased results due to endogeneity problems present in ordinary least squares (OLS).
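To fix ideas, here is a minimal sketch of the two-stage procedure just described (in Python with statsmodels; the file, variable, and column names are hypothetical, and the manually computed second-stage standard errors are not the corrected 2SLS errors reported in the tables, for which a dedicated IV routine should be used):

import pandas as pd
import statsmodels.api as sm

# Hypothetical merged cross-section: one row per district with famine counts,
# rainfall-shock counts, controls, outcomes, and current population.
df = pd.read_csv("district_cross_section.csv")
control_cols = ["tenant_share", "latitude", "mean_temp", "coastal", "area_km2"]  # hypothetical names

# First stage: recorded famine counts regressed on low-rainfall-year counts plus controls.
X1 = sm.add_constant(df[["n_rain_shocks"] + control_cols])
first_stage = sm.OLS(df["n_famines"], X1).fit()
df["n_famines_hat"] = first_stage.fittedvalues

# Second stage: an outcome regressed on predicted famine counts and the same controls,
# weighted by current population; HC1 gives heteroskedasticity-robust errors.
# (Manual two-step standard errors are only illustrative; use an IV routine for inference.)
X2 = sm.add_constant(df[["n_famines_hat"] + control_cols])
second_stage = sm.WLS(df["log_luminosity_pc"], X2, weights=df["population"]).fit(cov_type="HC1")
print(second_stage.summary())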
Table 5 – Instrumental Variables Diagnostics Notes: The weak-instrument test p-value is obtained from comparison of the first-stage F-statistic with the chi-square distribution with degrees of freedom corresponding to the model (number of data points minus number of estimands). Independent variable is the number of years in which the deviation of rainfall from the historic mean is in the bottom fifteenth percentile. Control specifications: (a) no controls, (b) land-tenure control (proportion of villages with tenant-ownership land tenure system), (c) geographic controls (see section three for enumeration), (d) both land-tenure and geographic controls. Source: Author calculations. 6. Discussion Our data suggest that there are long-run impacts of historical famines. Tables II, IV, and VII clearly show that the number of historical famines has a statistically significant, though small, impact on the following: the average level of economic development as approximated by nighttime luminosity, the share of the population employed in cultivation, consumption, inequality, and the provision of medical services in contemporary Indian districts. There appear to be no discernible effects on intergenerational income mobility or basic infrastructure such as electrification. The effects are quite small and are generally overshadowed by other geographical factors such as climate (i.e., latitude and temperature). They are also small in comparison to the impact of other colonial-era policies such as land-tenure systems. Nevertheless, they are still interesting to observe given that the famines in question occurred nearly a hundred years prior to the measurement of the outcomes in question. We contend that they reveal lasting and significant consequences of British food policy in colonial India. Table IV suggests that a hypothetical district having suffered ten famines - which is not atypical in our data - may have developed as much as ninety-four percent more log absolute magnitude per capita, around forty percent less consumption per capita in rural areas, 150 percent more of the workforce employed in cultivation, and a Gini index nearly ten percent greater than a district which suffered no famines. As to the question of whether or not the famines were directly caused by British policy, the results suggest that, at the very least, British nineteenth-century laissez-faire attitudes to disaster management have had long-lasting consequences for India. Moreover, these estimates are causal as the use of rainfall shocks as instruments provides a means of estimation which is "as good as random." Therefore, we can confidently state that these effects are truly the result of having undergone the observed famines. In considering whether to prefer our instrumental estimates or our least-squares estimates, we must mainly weigh the problems of a potentially weak instrument versus the benefits of a causal interpretation. We argue that we should still trust the IV estimates even though the instrument is not always as strong as we would like. First of all, the instrumentation of the recorded famine data with the demeaned rainfall data provides plausible causal estimation due to the fact that the rainfall measures are truly as good as random. Even if the recorded famine measure is itself reasonably exogenous, as suggested by the Hausman tests, we argue that it is better to be sure.
Using instruments for a variable which is already exogenous will not introduce additional bias into the results and may even help reduce attenuation bias from any possible measurement error. The Hausman test, after all, can- not completely eliminate this possibility; it can only suggest how likely or unlikely it is. In this sense, the instrumental estimates allow us to be far more confident in our assessment of the presence or absence of the long-run impact of famines. Though the first-stage F-statistics are less than ten, they are still large enough to reject the null hypothesis of instrument weakness as shown by the p-values for this test in Table V. We argue that it is better to be consistent than pick and choose which set of estimates we want to accept for a given dependent variable and model. We made this choice because the differences in magnitude between the IV and OLS coefficients are too large to do otherwise. A more interesting question raised by the reported coefficients in Table II, Table IV, and Table VII has to do with their sign. Why do districts more afflicted historically by famines seem to have more economic development yet worse out- comes in terms of rural consumption and inequality by our models? This could be due to redistributive preferences associated or possibly even caused by famines; Gualtieri et al. pose this hypothesis in their paper on earthquakes in Italy. We note that districts suffering more famines in the colonial era are more “rural” to- day in that they tend to have a greater proportion of their labor force working in cultivation. This cannot be a case of mere association where more rural districts are more susceptible to famine as our instrumental estimates in Table IV suggest otherwise. Rather, we explore the possibility that post-independence land reform in India was greater in relatively more agricultural districts. Much of the literature on land-tenure suggests that redistributing land from large landowners to smaller farmers is associated with positive effects for productivity and therefore, economic development (Iyer and Banerjee 2005, Varghese 2019). If the historical famines are causally associated with districts having less equal land tenure at independence, then this would explain their positive, though small, impact on economic development by way of inducing more land reform in those districts. On the other hand, if they are causally associated with districts remaining more agricultural in character at independence, and a district’s “agriculturalness” is only indirectly associated with land reform (in they only benefit because they have more agricultural land, so they benefit more from the reform), this would indicate that famines have a small and positive impact on economic development through a process that is less directly causal. Although we are unable to observe land-tenure and agricultural occupations immediately at independence, we are able to supplement our data with addition- al state-level observations of land-reform efforts in Indian states from 1957-1992 compiled in Besley and Burgess (2010) and aggregate the district-level observations of famines in our dataset by state (14). If our hypothesis above is correct, then we should see a positive association between the number of historical famines in a state’s districts and the amount of land-reform legislation passed by that state after independence, keeping in mind that provincial and state borders were almost completely reorganized after independence. 
Although these data are quite coarse, being at the state level, they are widely available. However, the plot below suggests completely the opposite relationship; each additional famine across a state’s districts appears to be associated with nearly 0.73 fewer land-reform acts. Even after removing the outlier of West Bengal, which underwent far more numerous land reforms due to the ascendancy of the Communist Party of India in that state, the relationship is still quite apparent; every two additional famines are associated with almost one fewer piece of land-reform legislation post-independence.
Figure 3: Historical Famine Occurrence vs. Post-Independence Land Reforms (second panel: Figure 3 with West Bengal removed)
Therefore, there seems to be little evidence that famines are associated with land reforms at all. This is quite puzzling because it is difficult to see how famine occurrence could lead to positive economic development while hurting outcomes such as inequality, consumption, and public goods provision. One potential explanation is that famines lead to higher urban development while hurting rural development, which would suggest that a key impact of famine occurrence is the worsening of an urban-rural divide in economic development. This would explain how higher famine occurrence is linked with higher night-time luminosity, which would itself be positively associated with urbanization, but is also linked with lower rural consumption, higher inequality (which may be the result of a stronger rural-urban divide), and a higher proportion of the workforce employed in the agricultural sector. For example, it is highly plausible that famines depopulate rural areas, leaving survivors to concentrate in urban centers, where famine relief is more likely to be available. Donaldson and Burgess (2012), who find that historical famine relief tended to be more effective in areas better served by rail networks, support this explanation. At the same time, the population collapse in rural areas would leave most of the remaining workforce employed in subsistence agriculture going forward. Thus, if famines do lead to more people living in urban areas while simultaneously increasing the proportion of the remaining population employed in agriculture, then they would also exacerbate inequality and worsen rural economic outcomes. If the urbanization effect is of greater magnitude, this would also explain the slight increase in night-time luminosity and electrification. This is somewhat supported by the plots in Figure 4, in which urbanization is defined as the proportion of a district’s population that lives in urban areas as labeled by the census. It appears that urbanization is weakly associated with famine occurrence (especially when using rainfall shocks) and positively associated with nighttime luminosity and inequality, while negatively associated with rural consumption and agricultural employment, as hypothesized above. However, the instrumental estimates of urbanization as a result of famine detailed in Table VI only weakly support the idea that famine occurrence causally impacts urbanization, as only the estimation without any controls is statistically significant.
Figure 4: Urbanization Rates vs. Famine Occurrence and Development Outcomes
Notes: The first two plots (in the top row) depict urbanization against famine occurrence and negative rainfall shocks. The rest of the plots depict various outcomes (discussed above) against the urbanization rate.
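As a purely illustrative companion to the state-level slope quoted above, the short sketch below computes a bivariate OLS slope on hypothetical numbers, with and without a single high-reform outlier standing in for West Bengal. The figures are invented for illustration and are not the paper’s data; the point is only the robustness check of dropping one influential observation and re-estimating.

```python
import numpy as np

# Hypothetical state-level data (NOT the paper's dataset): average famine count
# across a state's historical districts, and land-reform acts passed after
# independence. The final pair plays the role of a high-reform outlier.
famines = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 3.0])
reforms = np.array([9.0, 8.0, 7.0, 7.0, 5.0, 4.0, 3.0, 2.0, 1.0, 20.0])

def slope(x, y):
    """OLS slope of y on x, with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print("slope, all states:      ", round(slope(famines, reforms), 2))
print("slope, outlier removed: ", round(slope(famines[:-1], reforms[:-1]), 2))
```

If the sign and rough magnitude of the slope survive the exclusion, as the paper reports for the West Bengal check, the association is less likely to be an artifact of one unusual state.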
Table VI – Urbanization vs. Famine Occurrence
Notes: Independent variable is the percent of a district’s population that is urban as defined in the 2011 Indian census. Control specifications: (a) no controls, (b) land-tenure control (proportion of villages with tenant-ownership land tenure system), (c) geographic controls (see section three for enumeration), (d) both land-tenure and geographic controls. Source: Author calculations. *** Significant at the 1 percent level or below (p ≤ 0.01). ** Significant at the 5 percent level (0.01 < p ≤ 0.05). * Significant at the 10 percent level (0.05 < p ≤ 0.1).
Nevertheless, this represents a far more likely explanation for our results than land reform, especially since the land reform mechanism implies that famine occurrence would be associated with better rural outcomes. In other words, if an association between famines and land reform at independence were the real explanation behind our results, then, because the literature on land reform suggests that it is linked with improved rural development, we would not expect to see such strongly negative rural impacts of famine in our results. Therefore, not only is the explanation of differential urban versus rural development as a result of famine occurrence better supported by our data, it also constitutes a more plausible explanation for our findings. While we do not have enough data to investigate exactly how famine occurrence seems to worsen urban-rural divides in economic development (for example, through rural population collapse as hypothesized above), such a question would certainly be a key area of future study.
Conclusion
In this paper, we have shown that famines occurring in British India have a statistically significant long-run impact on present-day outcomes, using both ordinary least squares and instrumental variables, instrumenting for famine with climate shocks in the form of rainfall deviations. In particular, the occurrence of famine seems to exacerbate a rural-urban divide in economic development. Famines appear to cause a small increase in overall economic development, but lower consumption and welfare in rural areas while also worsening wealth inequality. This is supported by the finding that famines appear to lead to slightly higher rates of urbanization while simultaneously leading to a higher proportion of a district’s labor force remaining employed in the agricultural sector. Even though our ordinary least-squares estimates are generally acceptable, we point to the similar instrumental variable estimates as stronger evidence of the causal impact of the famines. Ultimately, our results demonstrate that negative climate shocks combined with certain disaster management policies, such as British colonial laissez-faire approaches to famine in India, may have significant, though counter-intuitive, impacts on economic outcomes in the long run.
Endnotes
1 One can essentially understand this technique as manipulating the independent variable, which may not be randomly assigned, via a randomly assigned instrument. 2 The Gini index measures the distribution of wealth or income across individuals, with a score of zero corresponding to a perfectly equal distribution and a score of one corresponding to a situation where one individual holds all of the wealth or earns all of the income in the group. 3 The Durbin-Wu-Hausman test essentially asks whether adding the instrument changes bias in the model.
A rejection of the null hypothesis implies that differences in coefficients between OLS and IV are due to adding the instrument, whereas the null hypothesis assumes that the independent variable(s) are already exogenous and so adding an instrument contributes no new information to the model. 4 Attenuation bias occurs when there is measurement error in the independent variable, which biases estimates downward due to the definition of the least-squares estimator as one which minimizes squared error on the axis of the dependent variable. See Durbin (1954) for a detailed discussion. 5 Classical growth theory, such as the Solow-Swan (1957) and Romer (1994) models, implies long-run convergence and therefore that districts would have similar outcomes today regardless of the number of famines they underwent. However, this is at odds with most of the empirical literature as discussed previously, in which there are often measurable long-term effects of natural disasters. 6 A Poisson process models count data via a random variable following a Poisson distribution. 7 Although we use the term damage, the impact on the economy need not be negative – indeed, we find that some impacts of famine occurrence are positive in sections four and five, which we attempt to explain in section seven. 8 Normally, OLS assumes that the variance of the error term is not correlated with the independent variable(s), i.e., the errors are homoscedastic. If this is not true, i.e., the errors are heteroscedastic, then the standard errors will be too small. Robust least-squares estimation calculates the OLS standard errors in a way that does not depend on the assumption that the errors are homoscedastic. 9 So, for example, if this value is 25, then there is on average no mobility, as sons would be expected to remain in the same income percentile as their fathers. Similarly, if it is less than (greater than) 25, then there would be downward (upward) mobility. A value of 50 would indicate perfect mobility, i.e., no relationship between fathers’ income percentiles and those of their sons. 10 For a brief overview of the types of systems employed by the East India Company and Crown administrators, see Iyer and Banerjee (2008), or see Roy (2006) for a more detailed discussion. 11 While reduced-form estimates – that is, estimating the outcomes as direct functions of the exogenous variables rather than via a structural process – are often not directly interpretable, they can serve to confirm the underlying trends in the data (for example, via the sign of the coefficients), which is why we choose to include them here. 12 The Sargan-Hansen test works very similarly to the Durbin-Wu-Hausman test, but instead uses a quadratic form on the cross-product of the residuals and instruments. 13 To be precise, this heuristic is technically only valid with the use of a single instrument, which is of course satisfied in our case anyway. 14 To be clear, the value of famine for each state is technically the average number of famines in the historical districts that are presently part of the state, since subnational boundaries were drastically reorganized along linguistic lines after independence.
Bibliography
Agbor, Julius A., and Gregory N. Price. 2014. “Does Famine Matter for Aggregate Adolescent Human Capital Acquisition in Sub-Saharan Africa?” African Development Review/Revue Africaine de Développement 26 (3): 454–67. Ambrus, Attila, Erica Field, and Robert Gonzalez. 2020.
“Loss in the Time of Cholera: Long-Run Impact of a Disease Epidemic on the Urban Land- scape.” American Economic Review , 110 (2): 475-525. Anand, R., Coady, D., Mohommad, A., Thakoor, V. V., & Walsh, J. P. 2013. “The Fiscal and Welfare Impacts of Reforming Subsidies in India”. The Inter- national Monetary Fund, IMF Working Papers 13/128. Anderson, T.W. and Rubin, H. 1949. Estimation of the parameters of a single equation in a complete system of stochastic equations. Annals of Mathematical Statistics, 20, 46-63. Asher, Sam, Tobias Lunt, Ryu Matsuura, and Paul Novosad. 2019. The Socioeconomic High- Resolution Rural-Urban Geographic Dataset on India. Asher, Sam and Novosad, Paul. 2019. “Rural Roads and Local Economic Development”. American Economic Review (forthcoming). Web. Bakkensen, Laura and Lint Barrage. 2018. “Do Disasters Affect Growth? A Macro Model-Based Perspective on the Empirical Debate”. IMF Workshop on Macroeconomic Policy and Income Inequality. Bannerjee, Abhijit and Lakshmi Iyer. 2005. “History, Institutions, and Economic Performance: The Legacy of Colonial Land Tenure Systems in India”. American Economic Review 95(4) pp. 1190- 1213. Besley, Timothy and Burgess, Robin. 2000. Land reform, poverty reduction and growth: evidence from India. Quarterly Journal of Economics, 115 (2). pp. 389-430. Bhatia, B.M. 1968. Famines in India. A Study in Some Aspects of the Economic History of India (1860- 1965) London: Asia Publishing House. Print. Bose, Sugata and Ayesha Jalal. 2004. Modern South Asia: History, Culture, Political Economy (2nd ed.) Routledge. Brekke, Thomas. 2015. “Entrepreneurship and Path Dependency in Regional Development.” Entrepreneurship and Regional Development 27 (3–4): 202–18. Burgess, Robin and Dave Donaldson. 2010. “Can Openness Mitigate the Effects of Weather Shocks? Evidence from India’s Famine Era”. American Economic Review 100(2), Papers and Proceedings of the 122nd Annual Meeting of the American Economic Association pp. 449-453. Carlyle, R. W. 1900. “Famine Administration in a Bengal District in 1896-7.” Economic Journal 10: 420–30. Cheng, Wenli, and Hui Shi. 2019. “Surviving the Famine Unscathed? An Analysis of the Long-Term Health Effects of the Great Chinese Famine.” Southern Economic Journal 86 (2): 746–72. Cohn, Bernard S. 1960. “The Initial British Impact on India: A case study of the Benares region.” The Journal of Asian Studies. Association for Asian Studies. 19 (4): 418–431. Cole, Matthew A., Robert J. R. Elliott, Toshihiro Okubo, and Eric Strobl. 2019. “Natural Disasters and Spatial Heterogeneity in Damages: The Birth, Life and Death of Manufacturing Plants.” Journal of Economic Geography 19 (2): 373–408. Davis, Mike. 2001. Late Victorian Holocausts: El Niño Famines and the Making of the Third World . London: Verso. Print. Dell, Melissa, Benjamin F. Jones, and Benjamin A. Olken. 2012. “Temperature Shocks and Economic Growth: Evidence from the Last Half Century.” American Economic Journal: Macroeconomics , 4 (3): 66-95. Dell, Melissa. 2013.“Path dependence in development: Evidence from the Mexican Revolution,” Harvard University Economics Department, Manuscript. Donaldson, Dave. 2018. “Railroads of the Raj: Estimating the Impact of Transportation Infrastructure.” American Economic Review , 108 (4-5): 899-934. Drèze, Jean. 1991. “Famine Prevention in India”, in Drèze, Jean; Sen, Amartya (eds.), The Political Economy of Hunger: Famine prevention Oxford University Press US, pp. 32–33. Dutt, R. C. 1902, 1904, 2001. 
The Economic History of India Under Early British Rule. From the Rise of the British Power in 1757 to the Accession of Queen Victoria in 1837 . London: Routledge. Durbin, James. 1954. “Errors in Variables”. Revue de l’Institut International de Statistique / Review of the International Statistical Institute , 22(1) pp. 23-32. Ewbank, R. B. 1919. “The Co-Operative Movement and the Present Famine in the Bombay Presidency.” Indian Journal of Economics 2 (November): 477–88. FAOSTAT. 2018. FAOSTAT Data. Faostat.fao.org, Food and Agriculture Organization of the United Nations. Fieldhouse, David. 1996. “For Richer, for Poorer?”, in Marshall, P. J. (ed.), The Cambridge Illustrated History of the British Empire , Cambridge: Cambridge University Press. Pp. 400, pp. 108–146. Goldberger, Arthur S. 1964. “Classical Linear Regression”. Econometric Theory . New York: John Wiley & Sons. Pp. 164-194. Gooch, Elizabeth. 2017. “Estimating the Long-Term Impact of the Great Chinese Famine (1959-61) on Modern China.” World Development 89 (January): 140–51. Gualtieri, Giovanni, Marcella Nicolini, and Fabio Sabatini. 2019. “Repeated Shocks and Preferences for Redistribution.” Journal of Economic Behavior and Organization 167(11): 53–71. Henderson, J. Vernon, Adam Storeygard, and David Weil. 2011. “A Bright Idea for Measuring Economic Growth.” American Economic Review. Hochrainer, S. 2009. “Assessing the Macroeconomic Impacts of Natural Disasters: Are there Any?” World Bank Policy Research Working Paper 4968. Washington, DC, United States: The World Bank. Holden, Stein T. and Hosaena Ghebru. 2016. “Land tenure reforms, tenure security and food security in poor agrarian economies: Causal linkages and research gaps.” Global Food Security 10: 21-28. Hoyle, R. W. 2010. “Famine as Agricultural Catastrophe: The Crisis of 1622-4 in East Lancashire.” Economic History Review 63 (4): 974–1002. Hu, Xue Feng, Gordon G. Liu, and Maoyong Fan. 2017. “Long-Term Effects of Famine on Chronic Diseases: Evidence from China’s Great Leap Forward Famine .” Health Economics 26 (7): 922–36. Huff, Gregg. 2019. “Causes and Consequences of the Great Vietnam Famine, 1944-5.” Economic History Review 72 (1): 286–316. Lima, Ricardo Carvalho de Andrade, and Antonio Vinicius Barros Barbosa. 2019. “Natural Disasters, Economic Growth and Spatial Spillovers: Evidence from a Flash Flood in Brazil.” Papers in Regional Science 98 (2): 905–24. Maxwell, Daniel, and Keith Daniel Wiebe. 1998. Land tenure and food security: A review of concepts, evidence, and methods . Land Tenure Center, University of Wisconsin-Madison, 1998. McKean, Joseph W. 2004. “Robust Analysis of Linear Models”. Statistical Science 19(4): 562–570. Nguyen, Linh, and John O. S. Wilson. 2020. “How Does Credit Supply React to a Natural Disaster? Evidence from the Indian Ocean Tsunami.” European Journal of Finance 26 (7–8): 802–19. Pinkovsky, Maxim L. and Xavier Sala-i-Martin. 2016. “Lights, Camera, ... In- come! Illuminating the National Accounts-Household Surveys Debate,” Quarterly Journal of Economics , 131(2): 579- 631. Li, Q. and J.S. Racine. 2004. “Cross-validated local linear nonparametric regression,” Statistica Sinica 14: 485-512. Riggs, D. S.; Guarnieri, J. A.; et al. (1978). “Fitting straight lines when both variables are subject to error.” Life Sciences . 22 : 1305–60. Romer, P. M. 1994. “The Origins of Endogenous Growth”. The Journal of Economic Perspectives . 8 (1): 3–22. Roy, Tirthankar. 2006. The Economic History of India, 1857–1947 . Oxford U India. Print. 
Ruppert, David, Wand, M.P. and Carroll, R.J. 2003. Semiparametric Regression. Cambridge University Press. Print. Salibian-Barrera, M. and Yohai, V.J. 2006. A fast algorithm for S-regression estimates, Journal of Computational and Graphical Statistics 15(2): 414-427. Scholberg, Henry. 1970. The district gazetteers of British India: A bibliography. University of California, Bibliotheca Asiatica 3(4). Sharma, Ghanshyam, and Kurt W. Rotthoff. 2020. “The Impact of Unexpected Natural Disasters on Insurance Markets.” Applied Economics Letters 27(6): 494–97. Solow, Robert M. 1957. “Technical change and the aggregate production function.” Review of Economics and Statistics 39 (3): 312–320. Srivastava, H.C. 1968. The History of Indian Famines from 1858–1918, Sri Ram Mehra and Co., Agra. Print. Staiger, Douglas, and James H. Stock. 1997. “Instrumental Variables Regression with Weak Instruments.” Econometrica 65(3): 557-586. Thompson, Kristina, Maarten Lindeboom, and France Portrait. 2019. “Adult Body Height as a Mediator between Early-Life Conditions and Socio-Economic Status: The Case of the Dutch Potato Famine, 1846-1847.” Economics and Human Biology 34 (August): 103–14. Varghese, Ajay. 2019. “Colonialism, Landlords, and Public Goods Provision in India: A Controlled Comparative Analysis”. The Journal of Development Studies, 55(7), pp. 1345-1363. Wang, Chunhua. 2019. “Did Natural Disasters Affect Population Density Growth in US Counties?” Annals of Regional Science 62 (1): 21–46. World Bank. 2011. “India Country Overview.” Worldbank.org

  • The European Union Trust Fund for Africa: Understanding the EU’s Securitization of Development Aid and its Implications | brownjppe

The European Union Trust Fund for Africa: Understanding the EU’s Securitization of Development Aid and its Implications
Migena Satyal (Author)
Jason Fu and Sophie Rukin (Editors)
Abstract
Migration policies in the European Union (EU) have long been securitized; however, the 2015 migration crisis represented a turning point for EU securitization of development aid to shape migration outcomes from various African countries. In 2015, the European Union Emergency Trust Fund for Stability and Addressing Root Causes of Irregular Migration and Displaced Persons in Africa (EUTF) was created at the Valletta Summit on Migration to address the drivers of irregular migration such as poverty, poor social and economic conditions, weak governance and conflict prevention, and inadequate resiliency to food and environmental pressures. The fund ran from 2016 to 2021. Central to the strategy of the EUTF was addressing “root causes”; however, the fund also came with security dimensions. Under its objective of improved migration management, the EU directed capital to various security apparatuses in Africa to limit the movement of irregular migrants and prevent them from reaching Europe. This method diverted aid from addressing the existing problems faced by vulnerable populations in the region and contributed to practices and organizations that are responsible for implementing coercive measures to limit the movement of migrants and for committing human rights abuses. This paper examines the political and ideological motives and objectives behind the EU’s securitization of development financing via the EUTF, how it has strategically used the “root causes” narrative to secure these arrangements, and the ways in which this pattern of interaction is inherently neo-colonial.
Introduction: The European Union Trust Fund for Africa (EUTF)
The European Union Emergency Trust Fund for Stability and Addressing Root Causes of Irregular Migration and Displaced Persons in Africa (EUTF for Africa) was passed in November 2015 at the Valletta Summit on Migration, where European and African heads of state met to address the challenges and opportunities presented by the 2015 migration crisis. African and European heads of state recognized that migration was a shared responsibility between the countries of origin, transit, and destination. They were joined by the African Union Commission, the Economic Community of West African States, states parties to the Khartoum and Rabat Process, the Secretary General of the United Nations, and representatives of the International Organization for Migration. The Valletta Summit identified the root causes of irregular migration and forced displacement, which became the guiding narrative to create and implement the EUTF. The Action Plan of the Summit stated, “the Trust Fund will help address the root causes of destabilization, forced displacement, and irregular migration by promoting economic and equal opportunities, strengthening the resilience of vulnerable people, security, and development.” Therefore, addressing these issues via development aid would limit irregular migration. The European Commission claimed that “demographic pressure, environmental stress, extreme poverty, internal tensions, institutional weaknesses, weak social and economic infrastructures, and insufficient resilience to food crises, as well as internal armed conflicts, terrorist threats, and a deteriorated security environment” needed to be addressed within the EUTF framework.
However, the root causes narrative itself was partially based on assumption rather than empirical evidence. Economic data analyzing the correlation between economic development aid and migration show that the relationship runs counter to the narrative’s assumption: economic and human development increase peoples’ ambitions, competencies, and resources, encouraging them to emigrate. Migration has a downward trend only when a country reaches an upper-middle income level. This concept is also known as the migration hump. Although EU officials were aware of this phenomenon, they ignored the underlying issues of the root causes narrative and proceeded to create the fund. Between 2016 and 2022, the EUTF disbursed approximately EUR 5.0 billion across 26 African countries in the Sahel and Lake Chad, North Africa, and the Horn of Africa. This funding was on top of the pre-existing EUR 20 billion in annual aid from the EU to these geographical regions. Despite packaging the EUTF as development aid and extracting the money almost exclusively from the European Development Fund (EDF), which specifically targets economic, social, and cultural development programs, the EUTF fell within the 2015 European Agenda on Migration, introducing a security dimension to development financing. The EU and African partner countries used a significant amount of aid from the EUTF to bolster migration management initiatives via the funding and strengthening of security apparatuses that are responsible for targeting migrants within Africa, before they could embark on their journeys to European states. Under the EUTF, improved migration management constitutes “contributing to the development of national and regional strategies on migration management, containing and preventing irregular migration, and fight against trafficking of human beings, smuggling of migrants and other related crimes, effective return and readmission, international protection and asylum, legal migration, and mobility.” It includes expanding logistical capacity, providing capital to train border agents, and bolstering surveillance infrastructure to monitor citizens’ movement. In some cases, it also relies on encouraging certain policies in recipient countries to align with the priorities of the donor countries. As shown in EUTF annual reports (Figures 1.1-1.6), there was an increasing diversion of capital towards funding migration management projects in Africa, which came at the expense of economic development projects. By using aid to fund security goals, the EU securitized and politicized development financing. Securitization in migration policy refers to the externalization and extra-territorialization of migration control through border controls and the reclassification of various activities, like drug trafficking, illegal immigration, and delinquency of migrants, as national security concerns. Still, some EUTF funding went towards projects geared at economic development. As stated in the Action Plan and shown in subsequent annual reports, the EUTF implemented programs that promoted job creation, education, entrepreneurship, and resilience building. However, the EU and its partners also used money from the development package to strengthen migration management initiatives and shift responsibilities to third countries in Africa, ultimately creating “legal black holes” where European norms about human rights did not apply.
Despite the clear evidence of the EU’s contribution to abuses towards African irregular migrants, the EU continues to implement repressive policies through various externalization mechanisms and faulty narratives that have been empirically shown not to work – such as the root causes narrative – in order to further its own interests on the African continent.
Research Question
The practice of funneling capital toward security-related migration management projects raises the following question: Why has the EU opted to securitize its development aid through the EUTF in the aftermath of the 2015 migration crisis? Furthermore, what are the implications of aid securitization in terms of development aid effectiveness, human rights practices, and the EU’s external legitimacy as a normative actor? Answering these overarching questions and understanding the promotion and proliferation of migration policies through pacts like the EUTF requires an inward look into the European Union and its political and ideological interests in the migration policy domain. Therefore, I propose that the EUTF was a neo-colonial mechanism through which European member states could further their migration policy priorities in certain African states, thereby reinforcing colonial legacy hierarchies.
Methodology
First, I will provide background information about the EUTF, highlighting its objectives and strategies for development aid implementation and effectiveness. Then, I will provide quantitative data regarding the disbursement of money from the EUTF to show the increasing investment in migration management schemes. Understanding these specificities and inherent challenges of the EUTF will contextualize my hypotheses. Next, I will support my hypothesis through case studies of specific EUTF security operations in African countries, analysis of the EU’s previous migration policies, interviews with African and European Union stakeholders about the EUTF’s development and impact, and various theories that help explain how the EU navigates its migration policies. Finally, I will assess the implications of aid securitization in both Europe and Africa. My research will rely on official documents from the EU about its migration agenda and policies. It will also use data from academic journals and previous literature that have examined the trajectory of the EU’s migration-development nexus, specifically through the EUTF. Assessing the current nature of the EU’s migration policies will be useful in helping guide future policies. As migration becomes an increasingly salient issue, it is crucial to determine strategies or “best practices” that are humane and sustainable to address it. Adhering to human rights norms should be at the center of these policies.
Background
The Action Plan of the Valletta Summit was based on five priority domains:
1. Reducing poverty, advancing socio-economic development, and promoting peace and good governance.
2. Facilitating educational and skills-training exchanges between African and EU member states, as well as the creation of legal pathways of employment for migrants and returnees.
3. Providing humanitarian assistance to countries needing food assistance, shelter, water, and sanitation.
4. Fighting against irregular migration, migrant smuggling, and trafficking.
5. Facilitating the return, readmission, and reintegration of migrants.
During Valletta, Martin Schulz, the former President of the European Parliament, stated, “By boosting local economies through trade, for example through economic partnership agreements and through ‘aid for trade’ programs, by investing in development and by enhancing good governance people will be enabled to stay where they want to be ‘at home.’” He reiterated that the purpose of the EUTF is not to “fight the migrants” but rather to “fight the root causes of migration: poverty and conflict.” This seemingly proactive approach underscores the belief that addressing the primary drivers of migration by promoting development measures will empower people to remain in their respective countries by choice rather than feeling compelled to migrate elsewhere.
“Root Causes”: Overlooking Evidence
The problem with the EU’s understanding and use of the “root causes” narrative is that it ignores how wage differentials contribute to migratory patterns. Wage differentials refer to the discrepancy in wages for similar jobs due to factors like industry or geography. While development aid can be effective, it is not enough to redistribute wealth and address the deep structural inequalities of the global economy that drive migration to more developed and wealthier countries. Subsequent sections will elaborate further on the adoption of the root causes framing.
EUTF Annual Aid Reports (2016-2022)
As stated in the Valletta Summit political declaration, the EU was committed to “address the root causes of irregular migration” through the EUTF. However, aid allocation data (Figures 1-1.6) from EUTF annual reports, which highlight the distribution of aid in amount and percentage terms by geographical window and by five of the EUTF’s objectives, show an increased prioritization of migration management schemes at the expense of development projects between 2016 and 2022. In 2016 (Figure 1), when the EUTF was in its implementation phase, EU officials distributed significantly more funds to economic development projects across North Africa, the Sahel, and the Horn of Africa than to any other domain, which aligned with the root causes narrative that was emphasized at Valletta. In 2017 (Figure 1.1), the allocation for improved migration management significantly increased across the three regions. In North Africa, funding for economic development, strengthening resilience, and conflict prevention was eliminated, while EUR 285 million was given to migration management. This pattern is strategic due to the geographic proximity of the region to southern European borders. In 2018 (Figure 1.2), North Africa remained the biggest recipient of migration management funds but did not receive funding for development projects. In 2019 (Figure 1.3), 31.56 percent of total funding was invested in migration management. In 2020 (Figure 1.4), 2021 (Figure 1.5), and 2022 (Figure 1.6), improved migration management projects continued to receive the most funding at the expense of other objectives. The funding patterns outlined in these reports show the EU’s increased focus on its migration objectives.
Figure 1: EUTF Projects Approved in 2016. Figure 1.1: EUTF Projects Approved in 2017. Figure 1.2: Projects Approved in 2018. Figure 1.3: Projects Approved in 2019. Figure 1.4: Projects Approved in 2020. Figure 1.5: Projects Approved in 2021. Figure 1.6: Projects Approved in 2022.
Taking the background information and data into account, I will support my hypothesis, explaining why the EU increasingly invested in migration management projects, in the following sections.
Defining Neo-Colonialism
The concept of ‘neo-colonialism’ was coined in Kwame Nkrumah’s Neo-Colonialism: The Last Stage of Imperialism, in which he argues that neo-colonialism is a contemporary form of colonialism that is perpetuated through less traditionally coercive methods, such as development aid. This theory can be applied when assessing relations and interdependency between former colonial states and formerly colonized states. Interdependence is manufactured by former colonial powers that “[give] independence” to their subjects, only to follow up by allocating aid. They speak about guaranteeing independence and liberation but never implement policies to preserve them, in an effort to maintain their influence and objectives via unobtrusive and monetary means rather than directly coercive ones. As a result, these countries’ economic systems, and thus their political policies, are “directed from outside” through foreign capital.
EUTF as a Neo-Colonial Instrument
In the 19th and 20th centuries, European powers reshaped all aspects of African society, through colonialism, for their own strategic imperatives. These included, but were not limited to, the extraction of material resources, manufactured dependency, and the assertion of European institutions and policies at the expense of indigenous cultures and institutions. The complete overhaul of pre-colonial Africa interrupted economic and political development in the region and led to its continued structural subordination despite the achievement of independence from European colonial states in the 20th century. As a result, the repercussions of colonialism have contemporary implications in EU-Africa relations. During the colonial era, colonial powers used military power and additional forms of coercive strategies to assert foreign influence; currently, former colonial powers capitalize on the weaknesses of African countries and use political and economic measures to gain influence. Colonialism never disappeared but rather evolved into neo-colonialism. This concept is demonstrated in the framework of the EUTF, which, despite being a development aid package and the product of a seemingly coordinated multilateral process, imposed conditionalities and security measures on African states to achieve political goals in the field of migration. Under the EUTF, patterns of cooperation between European countries and their former colonies to limit migration are also prevalent, especially in the case of Libya and Niger. These initiatives safeguard colonial-era power structures and undermine the sovereignty of the respective African states.
The EU took advantage of its status as a donor institution through three mechanisms that enforced hierarchies between African and European powers:
1. The governance structure, designed to dismiss African stakeholder engagement;
2. The EU’s imposition of positive and negative conditionalities on certain African states;
3. The strategic partnerships between European and African states to implement migration management programs.
These steps demonstrate the EU’s broader goal of asserting its influence over the region’s migration policies by implementing security schemes, jeopardizing the needs of African states and the preservation of human rights in the process. The use of the EUTF to conduct such projects signals a “de facto policy purchase” of African governments’ stances on migration. Consequently, African states become an “instrument” for European neo-colonial policies, especially in the migration domain.
Eurafrica to Modern EU-Africa Relations
The legacy and discourse of colonialism and neo-colonialism are not equal among EU member states. Many European countries were colonial powers, with the exceptions of Ireland and Malta, along with several central European countries that were themselves subjugated to the authority of larger imperial powers. However, specific past actions hold little significance when discussing the broader nexus between European integration, the European Union, and colonialism. In Eurafrica: The Untold History of European Integration and Colonialism, Peo Hansen and Stefan Jonsson argue that there was a vast overlap between the colonial and European projects. Several African countries, under colonialism, historically played a key role in efforts towards European integration and unity from the 1920s to the 1950s under the geo-political concept of Eurafrica. According to this idea, European integration would only occur through “coordinated exploitation of Africa and Africa could be efficiently exploited only if European states cooperated and combined their economic and political capacities.” The pan-European movement in the interwar period was based on conditions for peace through a “united colonial effort” in Africa. Eurafrica turned into a political reality with the emergence of the European Economic Community (EEC), made up of Belgium, France, Italy, Luxembourg, the Netherlands, and West Germany, along with colonial possessions that were referred to as “overseas countries and territories” (OCTs). For the EEC, Africa served as a “necessity,” “a strategic interest,” “an economic imperative,” “a peace project,” “a white man’s burden,” and “Europe’s last chance.” Put differently, “Africa was indispensable for Europe’s geopolitical and economic survival.” Africa became the guiding force of European integration, and Eurafrica became a system through which colonial powers could preserve their empires. Eurafrica, in its original form, did not materialize because African countries took back control from European colonial powers, but its legacy is crucial to the development of the EEC and modern EU-Africa relations. Today, the EU describes its relationship with Africa in terms like “interdependence” and “partnership of equals.” Nonetheless, the EU’s colonial past still plays a significant role in its foreign policy with Africa, as it promotes the adoption of European rules and practices in its “normative empires.” The continuation of these empires has cemented core-periphery dynamics of interaction, which ultimately advance European interests, especially in the migration domain.
Specifically, the EU’s externalization of border and migration management efforts to transfer the European model of governance to third countries has transformed those countries into “southern buffer zones” to curtail unwanted migration and enhance Europe’s sense of security. Such measures demonstrate the separation of physical borders from functional regimes in Europe’s fluid borderlands, and they hark back to imperial practices in which control was extended beyond territorial boundaries. These practices are evident in the EU’s security operations through pacts like the EUTF, the EU-Turkey Deal, and Operation SOPHIA. These externalization policies ensure the continuity of the vision derived from the Eurafrica project in the 21st century.
Conditional Aid
The EUTF was conditional, as it leveraged development aid to finance security-related migration projects and imposed positive and negative conditionalities that were used as leverage for African cooperation. When the European Commission announced its Migration Partnership Framework in 2016, it stated that development and trade policies would use positive and negative conditionalities to encourage cooperation on the EU’s migration management projects. The “more for more, less for less” framework embedded into development financing means that “African governments use migration cooperation as a bargaining chip for procuring finance through renting inherent powers of state sovereignty to control entry and exit.” This coercive and concessional method contradicts the nature of cooperation that was emphasized at the Valletta Summit in 2015 and undermines the autonomy of the African states, as these conditionalities perpetuate neo-colonial practices.
EUTF Governance Structure and Oversight
The EUTF was a product of a multilateral decision-making process. However, its governance structure, which limits proper stakeholder engagement from African representatives, signals the EU’s push to prioritize its policies over development in Africa. The European Commission claims that it is taking a bottom-up approach in which the EU delegations play a key role in identifying and formulating the EUTF through consultations and dialogues to build partnerships with local stakeholders (civil society organizations, national and local authorities, and representatives). Subsequently, proposals are created by the EUTF for Africa teams based at EU Commission headquarters and in EU delegations. Then, the proposal is submitted to the Operational Committee for approval. Once approved, the proposals are implemented via EU member states’ authorities, developmental and technical cooperation agencies, civil society organizations, international or UN organizations, and private sector entities. The governance of the EUTF is dependent upon the Strategic Board and the Operational Committees for each of the three regions where the EUTF distributed funds. The Strategic Board is responsible for “adopting the strategy of the EUTF, adjusting the geographical and thematic scope of the EUTF in reaction to evolving issues, and deciding upon amendments to the guiding document establishing the internal rules for the EUTF.” The board is chaired by the European Commission and composed of representatives and contributing donors.
The Operational Committee is responsible for “reviewing and approving actions to be financed, supervising the implementation of the actions, and approving the annual report and accounts for transmission to the Strategic Board.” In the Board and the Committee, the African partner countries can only act as observers and do not hold decision-making powers. This management framework is ineffective, as it is designed to limit the participation of African parties that have more comprehensive knowledge regarding the needs of the continent and the areas where funds need to be directed. However, they are structurally silenced. The classification of the EUTF as development aid from the EU to Africa also provided a loophole under which parliamentary oversight was not required. The European Development Fund, which operates outside the EU budget, funded most of the aid, bypassing conventional parliamentary procedures and allowing for swift implementation of the fund. A spokesperson for the European Commission’s DG DEVCO claimed that simplifying the procedures allows for more flexibility so that projects can be implemented earlier. Proponents of the fund believe that the easy implementation process is what makes it advantageous. However, opponents of the fund, like Elly Schlein, a member of the European Parliament’s Development Committee, claimed that the EU Parliament has not been given “the right democratic scrutiny” of the fund. The framing of the fund as an “emergency instrument” led to reduced bureaucratic procedures in the name of effectiveness, as project cycles were much shorter than in traditional development programming. The consolidation of power in the EU institutions and representatives meant that EUTF projects were “identified at the country level under the leadership of the EU Delegations, discussed and selected by an Operational Committee.” Engagement from African stakeholders and civil society was not required. An interview with a representative from the Operational Committee revealed that EUTF “projects were simply approved without discussion. Negotiations took place upstream between EUTF managers, European agencies, EU delegations, and partner countries.” This form of decision-making amplifies hierarchical structures between European and African representatives.
Strategic Partnerships
Certain EU member states partnered with African states to implement migration management programs in which they exercised authority over the movement of migrants within Africa, especially in the origin and transit countries. Not only do these policies directly conflict with the EU’s stated commitments regarding development aid and cooperation with partner countries, but the EU’s agenda also echoes the practice of European empires of leveraging local African officials to undertake security operations on the continent. Today, this exploitative relationship is mirrored in the EU’s allocation of capital, military equipment, and capacity-building instruments to African representatives who adhere to the needs of EU leaders. This pattern is visible in various projects and funding executed under the EUTF. Though reluctant to enter into such agreements with Europe, African policymakers are forced into a “perpetual balancing act, juggling domestically-derived interests with the demands of external donor and opportunity structures.” This concession stems from the inherent power asymmetry between relatively weak and powerful states, upholding colonial legacy hierarchies.
Case Studies on Libya and Ethiopia
In the following section, I use Libya and Ethiopia as case studies to provide evidence that the EUTF’s prioritization of funding for migration management projects, its expansion of policing and surveillance in these countries, and its imposition of positive and negative conditionalities reflect neo-colonial practices of asserting dominance over the movement of African irregular migrants. I chose these countries to study because each one falls within one of the two geographical windows and serves either as a popular departure or transit country where the European Union is heavily involved in migration management projects.
Libya
Libya is a major departure country for migrants from West African countries of origin such as Nigeria, Guinea, Gambia, Ivory Coast, Mali, and Senegal. Italy demonstrated strategic interest in Libya due to its geographical proximity and colonial legacy. Between 2017 and 2022, the Italian Ministry of Interior (MI) led the implementation of various migration management projects that sought to curb the arrival of migrants in Italy. In 2017, the MI led the first phase of its project called “Support to Integrated Border and Migration Management in Libya,” with a budget of EUR 42.2 million and EUR 2.23 million in co-financing from Italy. The principal objective of this phase was migration management. Focus areas included strengthening border control, increasing surveillance activities, combatting human smuggling and trafficking, and conducting search and rescue operations. The second phase of the project, worth EUR 15 million, was launched by the MI in 2018 and runs until 2024. This phase focused on capacity-building activities and the institutional strengthening of authorities such as the Libyan Coast Guard and the General Administration of Coastal Security. It also advanced the land border capabilities of relevant authorities and enhanced search and rescue (SAR) capabilities by supplying SAR vessels and corresponding maintenance programs. The beneficiaries of this project included 5,000 members of relevant authorities from the Libyan Ministry of Interior (MoI), Ministry of Defense (MoD), and Ministry of Communications. The indirect beneficiaries include “future migrants rescued at sea due to the provision of life-saving equipment to the Libyan Coast Guard and General Administration for Coastal Security for them to be able to save lives.” Italy’s actions under the EUTF compromise the proper use of development financing tools by diverting them to security-related projects. Its engagement with and strengthening of Libyan security apparatuses such as the Libyan Coast Guard also undermine the values of human rights that EU member states claim to promote in their foreign policies, as the Libyan Coast Guard is notorious for violating non-refoulement principles and committing human rights violations such as extortion, arbitrary detention, and sexual violence against migrants and asylum seekers. Recognizing the brutal actions of the border authorities and the deplorable living conditions in detention centers in Libya, the Assize Court in Milan condemned the torture and violence inflicted in these centers. In November 2017, the UN High Commissioner for Human Rights released a statement criticizing the EU’s support for the Libyan Coast Guard as “inhumane,” as it led to the detention of migrants in “horrific” conditions in Libya. Despite institutional disapproval of the EU’s and Italy’s involvement in Libya, funding for these security projects continued.
Ethiopia
While Ethiopia was never formally colonized, it remained under Italian occupation from 1935 to 1941 and subsequently fell under (in)formal British control from 1941 to 1944. The EUTF initiatives in Ethiopia do not show the same patterns of cooperation as seen in Libya and Niger, since Ethiopia served as a key interest to the EU due to its status as one of the main countries of origin, transit, and destination for migrants and refugees. The 2016 EUTF report highlighted that Ethiopia hosts over one million displaced people. It is also the biggest recipient of EUTF funding in the Horn of Africa. Its geographical proximity to countries like Eritrea, Somalia, and South Sudan has vastly affected its migration demographics, making it a focus area for the EU’s development aid under the EUTF. While there were pre-existing migration management schemes in Ethiopia, they were concerned with the return and reintegration of irregular Ethiopian migrants and refugees rather than with building up the capacity of various security actors as seen in other regions. This objective was linked with positive conditionalities, as the Third Progress Report on the Partnership Framework with third countries under the European Agenda on Migration links progress in the returns and readmissions field with more financial support for refugees who reside within Ethiopia. Additional projects in Ethiopia were geared towards economic development and focused on addressing the root causes as outlined in Valletta. Some of these initiatives included job creation and the provision of energy access, healthcare, and education to vulnerable populations, which are in line with development cooperation. However, the European Union’s increasing focus on the return and readmission of Ethiopian migrants can decrease revenue derived from remittances, which contribute three times more to the Ethiopian economy than development financing. This approach ensures the fulfillment of the EU’s migration interests while undermining Ethiopia’s economic needs. Ethiopian officials also expressed disappointment with the EUTF measures because they were guided by the EU’s focus on repatriation, thereby eroding migration cooperation with Ethiopia. With regard to EU interests in Ethiopia, an EU official claimed: “We can pretend that we have joint interest in migration management with Africa, but we don't. The EU is interested in return and readmission. Africa is interested in root causes, free movement, legal routes, and remittances. We don't mention that our interests are not aligned.” This non-alignment in interests is irrelevant to the EU because it is the more dominant actor and has the power to assert its priorities by using money as leverage. However, this pattern of interaction comes at the cost of losing cooperation with Ethiopian stakeholders and diverting finances from refugee and migrant populations in Ethiopia who need humanitarian assistance.
Perspectives from Africa
African representatives and ambassadors displayed suspicion about the fund’s motives and called on the EU to fund projects that increase economic opportunities in their respective countries. As Issouf Ag Maha, the Nigerien mayor of Tchirozerine, stated, “as local municipalities, we don’t have any power to express our needs. The EU and project implementers came here with their priorities.
It’s a ‘take it or leave it’ approach, and in the end, we have to take it because our communities need support.” Maha’s statement highlights the role the EU plays in shaping the direction of development money and how its priorities overshadow decisions and input from local officials, who are significantly more knowledgeable about the needs of their communities. Despite diverging interests and priorities, African officials concede to the EU’s demands because their communities require financial resources to alleviate hardships. President Akufo-Addo of Ghana claimed that “instead of investing money in preventing African migrants from coming to Europe, the EU should be spending more to create jobs across the continent.” Similarly, Senegalese President Macky Sall, former Chairperson of the African Union, warned that the trust fund to tackle the causes of migration is not sufficient to meet the needs of the continent, stating that “if we want young Africans to stay in Africa, we need to provide Africa with more resources.” The allocation of aid to security-related projects comes at the expense of funding genuine development projects that align with the needs of African communities. It also takes advantage of ‘cash-starved’ governments. These statements underscore the necessity for the EUTF to direct capital towards structural and sustainable economic development as opposed to combatting, detaining, or returning migrants. However, the EU has not been responsive to these inputs from its African stakeholders, despite stressing the importance of cooperation and partnership during the Valletta Summit.
Reinforcing Power Imbalances
The imposition of European policies and priorities through the EUTF takes advantage of African nations’ relatively weaker economic standing and agency, showing that the political and security needs of powerful states and institutions determine where and how development aid is designated. It also shows the continued influence and intervention of European interests in their ostensibly independent former colonial holdings, thereby reiterating Nkrumah’s theory that foreign capital, such as development aid, can be used for the exploitation of developing countries by their former colonial powers. This hypocrisy goes against the EU’s normative approaches to its foreign policy while also continuing to reinforce power imbalances and colonial-era hierarchies between Europe and Africa.
Discussion
Critically examining the European Union Trust Fund in the broader context of EU-Africa relations demonstrates how the EUTF represents a complex intersection of historical legacies, political interests and expediency, and political ideologies that determine attitudes towards migrants and refugees and thus shape policy outcomes. These factors reinforce each other, showing the multifaceted nature of migration governance. The neo-colonialism lens in my hypothesis provides historical context to show how enduring colonial legacies continue to guide policies today. This lens also forms the basis for discourse about EU-Africa relations because of the visible power imbalances that are sustained through policies like the EUTF, which are structurally designed to achieve European political interests at the expense of the needs of African states. As seen through the case studies on Libya, Niger, and Ethiopia, development aid is not always allocated for the benefit of the recipient. Rather, aid can be abused as a political tool to reach the objectives of the donor institutions.
Despite the rhetoric of cooperation between stakeholders, preservation of human lives, equal partnership, and addressing root causes, as stated at Valletta, the strategic policy design of the EUTF highlights the persistence of neo-colonialism because it continues historical patterns of exploitation and hierarchy between Europe and Africa.
Conclusion
The findings in this paper show that the EUTF was not merely a development instrument but also a political one that came with negative consequences for African irregular migrants. The securitization of aid, along with the EU’s other externalization policies, has not effectively solved the problems that caused the migration crisis. Instead, it has reinforced them. The model of the EU’s migration policies under the EUTF has also created issues beyond the realm of migration. As discussed, it continues to sustain power imbalances between Europe and Africa, shift aid priorities, and undermine development goals. Addressing the migration crisis will require a paradigm shift in the EU migration policy domain. The EU needs to move away from a security-based approach and toward a holistic, rights-based approach. This ideological reform requires the EU to look inward to address its own limitations and failures by recognizing its neo-colonial practices, acting out of mutual rather than political interests, and, lastly, collectively humanizing migrants and refugees arriving in Europe for safety and opportunities. Through these measures, the EU and African stakeholders can address the true root causes of migration – which stem from structural global inequalities.
References
“A European Agenda on Migration.” European Commission. November 2015. https://www.consilium.europa.eu/media/21933/euagendafor-migration_trustfund-v10.pdf Abdelaaty, Lamis. “European countries are welcoming Ukrainian refugees. It was a different story in 2015.” The Washington Post. March 23, 2022. https://www.washingtonpost.com/politics/2022/03/23/ukraine-refugees-welcome-europe/ Abrahams, Jessica. “Red flags raised over governance of EU Trust Fund projects.” Devex. September 22, 2017. https://www.devex.com/news/red-flags-raised-over-governance-of-eu-trust-fund-projects-91027 “Agreement Establishing The European Union Emergency Trust Fund For Stability And Addressing Root Causes Of Irregular Migration And Displaced Persons in Africa, And Its Internal Rules.” Trust Fund for Africa. 2015. https://trust-fund-for-africa.europa.eu/document/download/4cb965d7-8ad5-4da9-9f6d-3843f4bf0e82_en?filename=Constitutive%20Agreement%20 Alcalde, Xavier. “Why the refugee crisis is not a refugee crisis.” International Catalan Institute for Peace. Accessed March 14, 2024. https://www.icip.cat/perlapau/en/article/why-the-refugee-crisis-is-not-a-refugee-crisis/ Allen, Peter. “French politician says country is 'white race' and immigrants should adapt or leave.” The Mirror. September 27, 2015. https://www.mirror.co.uk/news/world-news/french-politician-says-country-white-6528611 Bachman, Bart. “Diminishing Solidarity: Polish Attitudes toward the European Migration and Refugee Crisis.” Migration Policy. June 16, 2016. https://www.migrationpolicy.org/article/diminishing-solidarity-polish-attitudes-toward-european-migration-and-refugee-crisis Ball, Sam. “France’s far-right National Front tops first round of regional vote.” France24. July 12, 2015.
https://www.france24.com/en/20151206-france-far-right-national-front-le-pen-tops-first-round-regional-election Boswell, C. “The ‘external dimension’ of EU immigration and asylum policy.” International Affairs , 79, no. 3 (2003): 619–639. https://onlinelibrary.wiley.com/doi/abs/10.1111/1468-2346.00326 Campbell, Zach.“Europe’s deadly migration strategy.” Politico. February 28, 2019. https://www.politico.eu/article/europe-deadly-migration-strategy-leaked-documents/ Cantat, Celine. “The ideology of Europeanism and Europe’s migrant other,” in International Socialism 152 ( October 2016). https://isj.org.uk/the-ideology-of-europeanism-and-europes-migrant-other/ Castillejo, Clare.“The EU Migration Partnership Framework: Time for a Rethink?” German Development Institute.” 2017. https://www.idos-research.de/uploads/media/DP_28.2017.pdf Chase, Jefferson. “AfD: From anti-EU to anti-immigration.” DW. October 28, 2019. https://www.dw.com/en/afd-what-you-need-to-know-about-germanys-far-right-party/a-37208199 Chebel d’Appollonia, Ariane. Frontiers of Fear: Immigration and Insecurity in the United States and Europe . Ithaca, NY: Cornell University Press, 2012 “CTR - BUDGET SUPPORT - Contrat relatif à la Reconstruction de l'Etat au Niger en complément du SBC II en préparation / Appui à la Justice, Sécurité et à la Gestion des Frontières au Niger.” European Commission. Accessed March 14, 2024. https://eutf.akvoapp.org/dir/project/5651 De Guerry, Olivia and Andrea Storcchiero. “Partnership or Conditionality? Monitoring the Migration Compacts and EU Trust Fund for Africa.”Concord Europe. 2018. https://concordeurope.org/wp-content/uploads/2018/01/CONCORD_EUTrustFundReport_2018_online.pdf “David Cameron: 'Swarm' of migrants crossing the Mediterranean.” BBC. July 30, 2015. https://www.bbc.com/news/av/uk-politics-33714282 European Commission. “Commission announces New Migration Partnership Framework: reinforced cooperation with third countries to better manage migration.” Media release. June 7, 2017. https://ec.europa.eu/commission/presscorner/detail/en/IP_16_2072 European Commission. “Fourth Progress Report on the Partnership Framework with third countries under the European Agenda on Migration.” no. 350. June 13, 2017. https://www.eeas.europa.eu/sites/default/files/4th_progress_report_partnership_framework_with_third_countries_under_european_agenda_on_migration.pdf European Council. “Remarks by President Donald Tusk at the press conference of the Valletta summit on migration.” Press release. November 12, 2015, https://www.consilium.europa.eu/en/press/press-releases/2015/11/12/tusk-press-conference-valletta-summit/ “EU Solidarity with Ukraine.” European Council. Accessed March 14, 2024. https://www.consilium.europa.eu/en/policies/eu-response-ukraine-invasion/eu-solidarity-ukraine/#:~:text=affordability%20(background%20information)-,Humanitarian%20aid,and%20host%20families%20in%20Moldova . “EU-Turkey joint action plan.” European Commission. October 15, 2015. https://ec.europa.eu/commission/presscorner/detail/en/MEMO_15_5860 Fanta, Esubalew B. “The British on the Ethiopian Bench: 1942–1944.” Northeast African Studies 16, no. 2 (2016): 67-96. https://www.jstor.org/stable/10.14321/nortafristud.16.2.0067 FitzGerald, David S. “Remote control of migration: Theorising territoriality, shared coercion, and deterrence.” Journal of Ethnic and Migration Studies , 46, no. 1 (2020): 4–22. https://www.tandfonline.com/doi/full/10.1080/1369183X.2020.1680115 Gray, Meral. 
Amanda, “Learning the lessons from EU-Turkey deal: Europe’s renewed test.” Accessed March 14, 2024. https://odi.org/en/insights/learning-the-lessons-from-the-euturkey-deal-europes-renewed-test/ Hansen, Peo and Stefan Jonsson. Eurafrica: The Untold History of European Integration . London: Bloomsbury Publishing, 2014. “International Affairs.” European Commission. Accessed March 14, 2024. https://home-affairs.ec.europa.eu/policies/international-affairs_en Islam, Shada.“Decolonising EU-Africa Relations Is a Pre-Condition For a True Partnership of Equals.” Center for Global Development. February 15, 2022, https://www.cgdev.org/blog/decolonising-eu-africa-relations-pre-condition-true-partnership-equals#:~:text=Senegalese%20President%20Mackey%20Sall%20who,need%20to%20provide%20Africa%20with Kabata, Monica and Jacobs, An. The ‘migrant other’ as a security threat: the ‘migration crisis’ and the securitising move of the Polish ruling party in response to the EU relocation scheme.” Journal of Contemporary European Studies , 13, no. 4 (November 13, 2022): 1223-1239 https://www.tandfonline.com/doi/full/10.1080/14782804.2022.2146072 Khakee, Anna. “European Colonial Pasts and the EU’s Democracy-promoting Present: Silences and Continuities.” Italian Journal of International Affairs 57, no 3, (2022): 103-120. https://www.tandfonline.com/doi/abs/10.1080/03932729.2022.2053352 Kirisci, Kemal. “As EU-Turkey migration agreement reaches the five-year mark, add a job creation element.” Brookings. March 17, 2021. https://www.brookings.edu/articles/as-eu-turkey-migration-agreement-reaches-the-five-year-mark-add-a-job-creation-element/ Kundnani, Hans. Eurowhiteness: Culture, Empire, and Race in the European Project. London: Hurst Publishers, 2023. Langan, Mark. Neo-colonialism and The Poverty of Development in Africa. Cham: Palgrave Macmillan, 2018. Lehne, Stefan. “How the Refugee Crisis Will Reshape the EU.” Carnegie Europe. February 4, 2016. https://carnegieeurope.eu/2016/02/04/how-refugee-crisis-will-reshape-eu-pub-62650 Liguori, Anna. Migration Law And Externalization of Border Controls. Abingdon: Routledge, 2019. Mager, Therese. “The Emergency Trust Fund for Africa: Examining Methods and Motives in the EU’s External Migration Agenda.” United Nations University on Institution on Comparative Regional Integration Studies. 2018. https://cris.unu.edu/sites/cris.unu.edu/files/UNU-CRIS%20Policy%20Brief%202018-2.pdf Mainwaring, Ċetta. “Constructing Crises to Manage: Migration Governance and the Power to Exclude.” At Europe’s Edge: Migration and Crisis in the Mediterranean. Oxford: Oxford Academic, 2019. https://academic.oup.com/book/32397/chapter/268689331 Maru, Mehari T. “Migration Policy-making in Africa: Determinants and Implications for Cooperation with Europe.” Working Paper 2021/54, European University Institute, 2021. https://cadmus.eui.eu/handle/1814/71355 Micinski, Nicholas and Kelsey Norman. Migration Management Aid, Governance, and Repression . Unpublished manuscript. Accessed March 14, 2024. “Mission.” EUNAVFOR MED Operation Sophia. Accessed March 14, 2024. https://www.operationsophia.eu/about-us/#:~:text=EUNAVFOR%20MED%20operation%20Sophia%20is,poverty%2C%20climate%20change%20and%20persecution . Moravcsik, Andrew. Review of Eurowhiteness: Culture, Empire, and Race in the European Project by Hans Kundnani. Foreign Affairs. October 23, 2023. https://www.foreignaffairs.com/reviews/eurowhiteness-culture-empire-and-race-european-project Nkrumah, Kwame . Neo-Colonialism, the Last Stage of Imperialism . 
London: Panaf, 1965. Oliviera, Ivo and Vince Chadwick.“Gabriel compares far-right party to Nazis.” Politico. October 23, 2015. https://www.politico.eu/article/sigmar-gabriel-compares-far-right-alternative-for-germany-afd-to-nazis-interview-rtl/ “Objective and Governance.” Emergency Trust Fund for Africa. Accessed March 14, 2024. https://trust-fund-for-africa.europa.eu/our-mission/objective-and-governance_en Pacciardi, Agnese.“A European narrative of border externalization: the European trust fund for Africa story,” in European Security . January 22, 2024. https://www.tandfonline.com/doi/full/10.1080/09662839.2024.2304723?src=#:~:text=The%20EUTF%20narrative%20portrays%20diaspora,their%20countries%20of%20origin%2C%20so Pare, Celine.“Selective Solidarity? Racialized Othering in European Migration Policies.” Amsterdam Review of European Affairs 1:1. Pages 42-54. https://www.europeanhorizonsamsterdam.org/_files/ugd/79a695_dbd76026a17f488ea00cae358bfebe8d.pdf#page=47 Reilly, Rachael, and Michael Flynn. “The Ukraine Crisis Double Standards: Has Europe’s Response to Refugees Changed?” Media Release. March 2, 2022. https://reliefweb.int/report/ukraine/ukraine-crisis-double-standards-has-europe-s-response-refugees-changed Rieker, Pernille and Marianne Riddervold. “Not so unique after all? Urgency and norms in EU foreign and security policy.” Journal of European Integration 44, no 4 (September 21, 2021): 459-473. https://www.tandfonline.com/doi/full/10.1080/07036337.2021.1977293 Roberts, Bayard, Adrianna Murphy, and Martin Mckee. Europe’s collective failure to address the refugee crisis” in Public Health Reviews, 37, no 1 (2016): 1-5. https://publichealthreviews.biomedcentral.com/articles/10.1186/s40985-016-0015-6 Sahin-Mencutek, Zeynep, Soner Barthoma, N. Ela Gökalp-Aras & Anna Triandafyllidou, “A crisis mode in migration governance: comparative and analytical insights,” in Comparative Migration Studies 10, no 12 (March 21, 2022): 1-19. https://comparativemigrationstudies.springeropen.com/articles/10.1186/s40878-022-00284-2#ref-CR22 Santos, Mireia F. “Three lessons from Europe’s response to Ukrainian migration.” European Council on Foreign Relations. August 9, 2023. https://ecfr.eu/article/three-lessons-from-europes-response-to-ukrainian-migration/ Schacht, Kira. “EU uses development aid to strongarm Africa on migration.” European Data Journalism. April 13, 2022. https://www.europeandatajournalism.eu/cp_data_news/eu-uses-development-aid-to-strongarm-africa-on-migration/ Schulz, Martin.“Speech at the Valletta Summit on Migration.” Speech. Valletta, Malta. November 11, 2015. European Parliament. https://www.europarl.europa.eu/former_ep_presidents/president-schulz-2014-2016/en/press-room/speech_at_the_valletta_summit_on_migration.html Shields Martin, Charles, Benjamin Schraven and Steffen Angenendt. “More Development- More Migration? The “Migration Hump” and its Significance For Development Policy Cooperation with Sub-Saharan Africa.” German Development Institute. 2017. https://www.idos-research.de/en/briefing-paper/article/more-development-more-migration-the-migration-hump-and-its-significance-for-development-policy-co-operation-with-sub-saharan-africa/ Silver, Laura. “Populists in Europe – especially those on the right – have increased their vote shares in recent elections.” Pew Research Center. October 6, 2022. https://www.pewresearch.org/short-reads/2022/10/06/populists-in-europe-especially-those-on-the-right-have-increased-their-vote-shares-in-recent-elections/ Sojka, Aleksandra. 
“Supranational identification and migration attitudes in the European Union.” BACES Working Paper no. 02-2021. Barcelona Center for Migration Studies, 2021. “Strategy for Security and Development in the Sahel.” European Union External Action Service. European Union External Action Service. Accessed March 14, 2024. https://www.eeas.europa.eu/sites/default/files/strategy_for_security_and_development_in_the_sahel_en_0.pdf Support to Integrated border and migration management in Libya – First Phase.” Emergency Trust Fund for Africa. Accessed March 14, 2024, Africa, https://trust-fund-for-africa.europa.eu/our-programmes/support-integrated-border-and-migration-management-libya-first-phase_en “Support to Integrated border and migration management in Libya – Second Phase.” Emergency Trust Fund for Africa. Accessed March 14, 2024. https://trust-fund-for-africa.europa.eu/our-programmes/support-integrated-border-and-migration-management-libya-second-phase_en Tawat, Mahama and Eileen Lamptey. “The 2015 EU-Africa Joint Valletta action plan on migration: A parable of complex interdependence.” International Migration 60, no. 6 (21 December 2020): 28-42. https://onlinelibrary.wiley.com/doi/10.1111/imig.12953 Trust Fund Financials.” Emergency Trust Fund for Africa. Accessed March 14, 2024. https://trust-fund-for-africa.europa.eu/trust-fund-financials_en Tusk, Donald. “Valletta Summit on Migration.” Speech. Valletta, Malta. November 2015. European Council. https://www.consilium.europa.eu/en/meetings/international-summit/2015/11/11-12/ “Valletta Summit, 11-12 November 2015 Action Plan.” European Commission. Accessed March 14, 2023. https://www.consilium.europa.eu/media/21839/action_plan_en.pdf “What is the EU-Turkey deal?” Rescue. March 16, 2023. https://www.rescue.org/eu/article/what-eu-turkey-deal Zaun, Natascha and Olivia Nantermoz. “Depoliticising EU migration policies: The EUTF Africa and the politicization of development aid.” Journal of Ethnic and Migration Studies 49, no. 12 (May 2023): 2986-3004. https://www.tandfonline.com/doi/full/10.1080/1369183X.2023.2193711 Zaun, Natascha and Olivia Nantermoz. “The use of pseudo-causal narratives in EU policies: the case of the European Union Emergency Trust Fund for Africa.” Journal of European Public Polic y 29, no. 4 (February 28, 2021): 510-529. https://www.tandfonline.com/doi/full/10.1080/13501763.2021.1881583#:~:text=According%20to%20the%20Commission%20Decision,insufficient%20resilience%20to%20food%20crises'%2C “2016 Annual Report.” Trust Fund for Africa. 2016. https://trust-fund-for-africa.europa.eu/system/files/2018-10/eutf_2016_annual_report_final_en-compressed_new.pdf “2017 Annual Report.” Trust Fund for Africa. 2017. https://trust-fund-for-africa.europa.eu/document/download/1a5f88be-e911-4831-9c2a-3a752aa27f7e_en?filename=EUTF%202017%20Annual%20Report%20%28English%29.pdf “2018 Annual Report.” Trust Fund for Africa. 2018. https://trust-fund-for-africa.europa.eu/document/download/fb0737ce-3183-415a-905c-4adff77bfce3_en?filename=Annual%20Report%202018%20%28EN%29%20 “2019 Annual Report.” Trust Fund for Africa. 2019. https://trust-fund-for-africa.europa.eu/document/download/e340b953-5275-43e5-8bd3-af15be9fc17a_en?filename=EUTF%202019%20Annual%20Report%20%28English%29.pdf “2020 Annual Report.” Trust Fund for Africa. 2020. https://trust-fund-for-africa.europa.eu/document/download/4a4422e5-253f-4409-b25c-18af9c064ca1_en?filename=eutf-report_2020_eng_final.pdf “2021 Annual Report.” Trust Fund for Africa. 2021. 
https://trust-fund-for-africa.europa.eu/document/download/f3690961-e688-44de-9789-255875979c1b_en?filename=EUTF%202021%20Annual%20Report%20%28English%29 “2022 Annual Report.” Trust Fund for Africa. 2022. https://trust-fund-for-africa.europa.eu/document/download/f3690961-e688-44de-9789-255875979c1b_en?filename=EUTF%202021%20Annual%20Report%20%28English%29

  • In the Augenblick | brownjppe

    In the Augenblick, Not the Moment: A Heideggerian Critique of Temporal Inauthenticity Lukas Bacho Author Gabriel Gonzalez Alexander Gerasimchuk Matthew Wong Editors “Be in the moment!” In our chronically online and attention-deficient age, this admonition is a constant refrain. It usually means: “Focus on neither the past nor the future, but rather the present—what is happening right now.” A favorite instruction of guided meditations, it may also be heard as a protest against the impulse to sully a beautiful view with a photo shoot. Too often, our minds are so clouded by remorse for past events or anxiety about future events that we are unable to appreciate the present for what it is. Clearly, there is some truth to this. However, the normative force of “Be in the moment!” relies on the misleading descriptive claim that we are only ever in the moment (so why try to exist outside it?). This, in turn, rests on a conception of time as a series of punctual moments, as on a timeline, that seem linked only because we perceive them as such. Martin Heidegger had a name for this understanding: “now-time,” or the “ordinary” (Vulgär ) conception of time. I seek to argue, with Heidegger’s help, that the admonition to “be in the moment” obscures essential features of our temporality, thereby diminishing our potential for authentic living. My primary aim is to reconstruct Heidegger’s accounts of now-time, world-time, originary temporality, and the authentic mode of relating to all of these. What emerges is the foundation for a more authentic way of relating to time whose explanatory priority lies not in one’s present situation but in one’s future possibilities. Now-time (Jetzt-Zeit ) is the most proximal conception of time according to which we humans, as Dasein , live our lives. On this understanding, Heidegger writes, “time shows itself as a sequence of nows which are constantly ‘present-at-hand,’ simultaneously passing away and coming along. Time is understood as a succession, as a ‘flowing stream’ of nows, as the ‘course of time.’” The language of “sequence” and “succession” indicates that time is here understood as a series of discrete moments so short that their continual coming and going seems to constitute a flow, but in fact does not. Each moment, or “now,” is “present-at-hand” in the sense that it has an objective (and thus constant) duration. One can quibble about how long exactly the “now” is, but most would say a fraction of a second. Heidegger calls this conception “now-time” because it views time as nothing but these infinitesimally short “nows,” linked by nothing but one another; for “the sequence of nows is uninterrupted and has no gaps.” Now-time resembles a popular position in contemporary philosophy known as the cinematic or snapshot view of time, which holds that “neither our awareness itself nor its contents have temporal extension.” But now-time is also the idea we live by in our everyday lives, most obviously in our use of clocks. The convention of designating the current “now” with clock-time reflects our conception of time as a series of discrete moments: 4:17 comes after 4:16, the fourth second of a minute comes after the third, and so on. Heidegger emphasizes that now-time existed long before the invention of clocks, for Dasein has always measured its time, whether by the sun or some other means; the only difference is that the units of measurement have changed. If time were divided into sufficiently short instants, the logic goes, there would be nothing between them.
Indeed, now-time is so integral to our everyday existence that it is hard to imagine any other way to conceive of time. Now-time is implicit in our telling someone to “be in the moment.” To show how, let us begin by acknowledging that the imperative asks one to exist in the present, at the exclusion of both the past and the future. What is the present? Although the word “moment” seems to leave the present’s length ambiguous—it could be a split second, or the multi-hour duration of an activity—the statement’s exclusion of the past and future actually requires that the “moment” be infinitesimally short. If one were to “be” in the next minute or even the next second—that is, anticipate or worry about what will happen then—one could not claim to be in the moment. Thus, the perception of time as a succession of constantly fleeting nows underlies “be in the moment.” But that is not all: the insistence upon the singularity of the moment betrays the idea that there is only ever one moment to be in. In fact, the moment has no duration: like the instants of now-time, the moment is a point . Thus, “be in the moment,” as a statement of now-time, objectifies the present in such a way that there is nothing significant about it except the fact that it is the present. Why should we be in the present? Because it is the present—because it is all that is. Heidegger complicates this picture by introducing the notion of world-time (Weltzeit ). If now-time is responsible for our sense of the present’s punctuality, world-time is responsible for our sense of the present’s universality, and is thus explanatorily prior to our conception of now-time. As Heidegger puts it, world-time is “that time ‘wherein’ entities within-the-world are encountered.” In other words, it is the kind of time that enables us to encounter things in the world. We can clarify what world-time is by examining its four constitutive aspects in turn: publicness, datability, spannedness, and worldhood. The most accessible of world-time’s four aspects is publicness (Öffentlichkeit ). Indeed, Heidegger often calls world-time “public time.” Publicness is the characteristic of world-time whereby we take ourselves to be in the same “now” as one another at any given time. Publicness allows me to say to another person, “Now it is twelve o’clock,” knowing that if they are in the same time zone, it is now twelve o’clock for them, too. If they are not in my time zone—if we are talking on the phone, for instance—I still understand that while it is currently another time for them, we are fundamentally in the same now . And publicness extends beyond the now: only because we understand time as public, as shared, as out there in the world, can we say that we “use,” “buy,” or “borrow” time. Publicness is perhaps the aspect of world-time that is least concealed in now-time, since the measurement of time with clocks and timeliness obviously presupposes that any quantified time will be intelligible as the same “now” by everyone. Still, the fact that we take ourselves to be in the same now remains hidden in now-time. We take for granted that “now” is simply now —that when one person says “be in the moment,” the other will know what time they mean. A second aspect of world-time is datability (Datierbarkeit ), the structure by which Dasein assigns a temporal structure to its experience. 
In practice, datability refers to our assignment of times to events and events to times, even “before” we impose the numerical values of now-time (like “November 8” or “9:15 a.m.”) on those events. For example, when we say “It is cold,” we mean “It is cold now ,” just as when we say “It was cold,” we mean “It was cold formerly .” Conversely, time has content for us, for “When we say ‘now,’ we always understand a ‘now that so and so.’” Although datability includes the word “date,” it has nothing to do with numerical dates. Instead, datability simply means that all that happens is happening at a time, and every time is a time when something is happening. Clearly, datability enables the conception of now-time, since interpreting time as a sequence of nows makes sense only if Dasein has an intuitive idea of its existence within a structure of past, present, and future. If Dasein could not date itself, time could not seem to be a “flowing stream,” since Dasein would not be fixed in relation to it. In this admittedly murky way, now-time reveals datability as a feature of world-time. Mostly, however, now-time covers up datability, for the now of now-time is not understood to be “now, when x ,” but rather simply “now.” This is especially glaring in “Be in the moment!” In the moment when you are doing what? The admonition suggests that the moment is a space where you need not do anything, when in fact every moment is always a moment when you are doing something. The aspect of world-time which may be most obscured by now-time is spannedness (Spanne ), which affords every “now” the property of duration. Heidegger introduces the concept of spannedness by observing that we understand there to be a length of time—not just a series of nows—between any “now” and a future “then.” This liminal length is itself datable with expressions like “during” and “meanwhile,” which shows that we can conceive of a future “now” (and by extension, any past or present now) with a duration we ourselves have determined. Spannedness accounts for how I can simultaneously say “Now I am writing,” “Now I am a student,” and “Now I am alive,” even though these nows are of vastly different lengths. In fact, no now to which we refer is ever punctual; every now is temporally extended. Even the clock, our paradigmatic instrument of now-time, reveals the spanned nature of world-time by designating as an hour an arbitrary number of minutes and as minute an arbitrary number of seconds. Seconds may be in turn divided into milliseconds, nanoseconds, and so on—there are infinite nows between one second and the next—though the clock does not show this directly. Assigning numbers to time requires that we pin down the now as if it were punctual, when in fact it is spanned. Much like a clock obscures the spannedness of seconds, the statement “be in the moment”—in its exclusion of anything that might be called past or future—obscures the spannedness of said moment, despite the fact that “moments” are by definition variable in length. The fourth aspect of world-time is worldhood (Weltlichkeit ), which situates every time in a normative structure of significance. As Heidegger explains, “The time which is interpreted in concern is already understood as a time for something. The current ‘now that so and so…’ is as such either appropriate or inappropriate .” He returns to the sun for a primitive example: depending on the context, the now of dawn is understood implicitly as the time for waking up or the time for going to work. 
The clock, as an instrument of now-time, obscures this aspect of world-time by seeming to give every “now” equal status. But it is in light of the worldhood of time that clocks are useful to us: 8:00, for example, is not just a string of numbers—“the time it is now”—but “the time for waking up,” or whatever the case may be. Moreover, the design of a clock—which assigns the hour and half-hour to the extreme points of its vertical axis, and the fifteen-minute intervals between these to its leftmost and rightmost points—reflects our taking certain numerical times to be more appropriate than others as times for anything. For instance, 9:00 is a more “appropriate” time than 9:03 or 9:10 not by itself, but rather for setting an alarm to wake up, holding a meeting, etc. The statement “be in the moment” similarly covers up the worldhood of time by suggesting that the moment is not “for” anything but itself. When someone leading a meditation says it, they want the one hearing to “forget” that they have made the moment significant as a moment for meditating. When a photo-averse person says it, it is because they have designated the moment as a moment for enjoying the scenery, not a moment for taking photos. The worldhood of time entails that by doing anything, I am implicitly asserting that now is the right time to do it. As we have seen, the admonition to “be in the moment” covers up all four aspects of the kind of time (world-time) from which we derive our ordinary conception of time (now-time). Yet Heidegger shows us that world-time is in turn explicable only by an even more basic kind of time, originary or primordial (ursprünglich ) time. Primordial time is the kind of time that Heidegger has been working to uncover throughout Being and Time ; it is “the condition which makes the everyday experience of time both possible and necessary.” In Division II, Chapter 6, he gets primordial time into view by observing that Dasein is not just Being-towards-the-end (i.e., death), but also Being-towards-the-beginning (i.e., birth). To see this, we need not look further than Dasein’s characteristic activity of thrown projection, by which Dasein claims the circumstances it has been thrown into from birth , even as it reinterprets them by projecting its own possibilities until death. Because of the bidirectional gaze of thrown projection, “Dasein does not exist as the sum of the momentary actualities of Experiences which come along successively and disappear.” In other words, Dasein does not exist exclusively in now-time, for Dasein is not just the sum of its experiences at a series of present-at-hand nows. Rather, Dasein is also its past circumstances and future possibilities. As Heidegger puts it, Dasein “is stretched along and stretches itself along ” primordial time via its own activity. The scope of primordial time is Dasein’s entire lifetime, without which the four aspects of world-time could not exist. The now could not be public, datable, spanned, or worldly without the finite being that discloses the now as public, dates the now, relates the now to the broadest now of its own life, and renders the now a time for something in light of its finitude. Therefore, primordial time is the kind of time that makes Dasein a whole and undergirds its Being as care (cf. ). Encouraging someone to “be in the moment” obfuscates primordial time, thereby exemplifying an inauthentic relation to time that Heidegger calls “making-present” (gegenwärtigen ). 
Making-present describes a state of “falling into the ‘world’ of one’s concern”—the everyday realm where Dasein’s perspective is confined to the objects it encounters as equipment for fulfilling immediate ends. In making-present, Dasein’s attention becomes myopic: it seems to forget its Being as thrown projection, which is to say it forgets that it goes about all its everyday tasks in the context of broader priorities. Of course, the most global context Dasein forgets in making-present is its own finitude, in virtue of which all its priorities matter. The imperative to “be in the moment” epitomizes making-present because it disallows making sense of what one is doing now in light of anything futural; thus, it stands opposed to the maxim “live every day as if it were your last,” even though similar sentiments may motivate the two statements. To “be in the moment” is to forget not only that one has priorities, but also that everything one does is an implicit articulation of those priorities. Consequently, one’s experience of time becomes “an inauthentic awaiting of ‘moments’—an awaiting in which these are already forgotten as they glide by.” Time seems never to arise, but only to pass away; one conceives of oneself not as stretching oneself along time, but rather passively lost in its flow. What making-present makes present, then, is primordial time itself, whose past and futural aspects are subjugated to the cult of the present “moment.” Heidegger reveals our potential for a more authentic relation to primordial time and world-time by contrasting making-present with his concept of the Augenblick , in which Dasein recognizes its past, present, and future as inseparable aspects of its own wholeness. The Augenblick is “the resolute rapture with which Dasein is carried away to whatever possibilities and circumstances are encountered in the Situation as possible objects of concern.” Bearing in mind both its possibilities (projection) and its circumstances (thrownness), Dasein does not lose sight of its priorities amid the world of its concern, but sees those priorities themselves as objects of concern to be constantly actualized and reevaluated. In the Augenblick, Dasein understands its Being as care and itself as finite, but not in such a way that it is afraid of its own death; its rapture is resolute , at once unflinching in its acknowledgment of mortality and steadfast in its commitment to living. The Augenblick is an “ecstasis” in the sense that it allows Dasein to stand outside the world of its concern—outside the punctual present of now-time—and grasp world-time and primordial time, if only implicitly, as the grounds of its temporal experience. Crucially, the Augenblick does not mean an escape from the present—where all experience occurs—but rather expands the present to include one’s whole life. If we translate it as “moment,” we had better bear in mind the English word’s other meaning of “importance,” from which we get “momentous.” The Augenblick renders the present important—i.e., consequential—precisely by being the “gaze of the eye,” for it is in the present (both right this second and during one’s life ) that one judges practically what is worth attending to by focusing on certain things rather than others. The Augenblick, then, could not be more different from the “moment” of “be in the moment,” for while the former imbues the now with momentous stakes by maximally dilating it, the latter deflates the stakes of the now by maximally contracting it. 
Even the English word “moment” obscures the essential relation between Dasein and time, whereas the German word Augenblick identifies Dasein’s caring activity—its gaze—as the precondition for temporal experience and Dasein’s sense of continuity from one moment to the next. It is an inevitable consequence of the Augenblick’s expansion of the now that the future acquires explanatory priority over the present in the question of Dasein’s Being. While the inauthentic understanding of one’s potentiality-for-Being “temporalizes itself in terms of making present,” Heidegger observes, the Augenblick does so “in terms of the authentic future.” This means that while making-present confines the implications of one’s activity to the punctual now of now-time, the Augenblick discloses those implications as primarily futural. Thus, the Augenblick is explicable not in terms of the vulgar “now” (dem Jetzt ), but in terms of future possibilities: as the “gaze of the eye,” it “permits us to encounter for the first time what can be ‘in a time’ as ready-to-hand or present-at-hand.” In the Augenblick, Dasein discovers itself in the equipment that constitutes the world of its concern, which in turn leads it to recognize that it is the one responsible for stretching oneself along and projecting itself toward certain possibilities rather than others. The worldhood of time becomes particularly apparent, for the current “moment” no longer seems trivial; every “now” becomes significant in terms of what it is a time for , which is to say in terms of its bearing on the future. So while “be in the moment” suggests that the present is all that matters, the Augenblick insists that the present matters only because the future does. In Heidegger’s categories of inauthenticity and authenticity we find the foundation I promised for a more authentic way of relating to time. “Be in the moment” exemplifies an inauthentic mode of relating to time—i.e., making-present—that obscures world-time and primordial time as the fundamental structures of our experience. To be in the Augenblick, on the other hand, is to relate to the now authentically : it means to own up to the present as datable, public, spanned, and worldly; and to understand it as inseparable from the past and future. In the inauthentic mode, one is lost in one’s immediate concerns rather than seeing the “big picture,” and passively awaits the future rather than owning it as the ground of one’s priorities. Thus, although “be in the moment” seems to inflate the status of the present, it actually diminishes the present into a kind of hollow shell. But in the authentic mode, one owns up to both the past and future—stretching back to one’s birth and forward to one’s death—as constitutive of who one is and what one does. Heidegger’s authenticity is proto-ethical in that it denotes appropriation of one’s own temporality as the ground of one’s reasons for doing this rather than that in any given case. Yet authenticity is not fully ethical, for while it describes a relation to one’s reasons (the “subjective ought”), it fails to prescribe specific actions (the “objective ought”). The extent to which one could derive the latter from the former is doubtful, at least within the framework of Being and Time. Yet authenticity, if proto-ethical, is far from irrelevant. We could retort that relating authentically to time is further than most people get in life—never mind living ethically. 
By saying things like “be in the moment,” we evacuate ourselves from the now, only to reinsert ourselves in it as passengers. We say that time is a flowing stream, forgetting that we are the ones stretching ourselves along. At worst, we pretend indifference, when in fact—as being in the Augenblick reminds us—there is nothing more fundamental to our experience than that we care. References Hägglund, Martin. “Lecture 25: Now-Time, World-Time, and Originary Temporality.” Lecture. PHIL 402: Being and Time, Yale University, 24 April 2024. Heidegger, Martin. Being and Time. 1927. Translated by John Macquarrie and Edward Robinson, Harper Perennial, 2008. Phillips, Ian, editor. The Routledge Handbook of Philosophy of Temporal Experience. Routledge, 2017.

  • Bess Markel

    Bess Markel Rural Despair and Decline: How Trump Won Michigan in 2016 Bess Markel Introduction When Donald Trump won the Electoral College vote in 2016, he shocked the entire world. In part, people believed he could never win because he would never crack the Democrats’ famous “Blue Wall”: the combination of Michigan, Wisconsin, and Pennsylvania. But he did, winning counties that John McCain and Mitt Romney could not. Political pundits asked themselves: How did this entitled, brash, inexperienced New York millionaire appeal to rural voters? What seems like thousands of think-pieces have been written on the issue, each suggesting that Trump won the election because of Russian interference, deeply rooted misogyny, racial backlash to Obama’s presidency, the rise of social media, and a myriad of other factors. However, some scholars have suggested that Trump’s unexpected triumph could be traced to another factor: pain and discontentment across rural America. Over the past several decades, America’s working class has seen its way of life disappear. With a loss of jobs due to innovative technology, outsourcing of manufacturing jobs, and mass migration out of Rust Belt states, residents of Illinois, Indiana, Michigan, Ohio, Pennsylvania, West Virginia and Wisconsin—areas that used to be vibrant parts of America’s heartland—feel left behind (1). Some believe that this downward trajectory helped spark the rise of Trump. While this might seem counterintuitive at first because Trump is viewed by the liberal media as an uncaring, East Coast elite, scholars have strived to understand the appeal of Trump among working-class, white voters, particularly in the Midwest. This line of research is particularly important for the future of electoral politics. The movement Trump’s election sparked, and even Trump himself, are not going away anytime soon. Understanding Trump’s appeal can explain his continued support and how other candidates can seize upon the movement he built. This research paper will explore the connection between despair and the rise of Donald Trump. It will use data on unemployment, education levels, and levels of drug- and alcohol-related deaths and suicide taken from the swing state of Michigan, which narrowly helped Trump win the 2016 election. The data section will show that Trump performed better in places where residents seemed more likely to feel economically and socially left behind. However, we must first classify and understand what scholars mean when they discuss despair in rural, working-class America. Understanding Rural Support for Trump Political scientist Katherine Cramer, in her book Politics of Resentment, argues that rural politics can be understood as stemming from the creation of a rural community consciousness, rooted in resentment toward “elites” and urbanites (2). Through interviews with a group of locals in rural Wisconsin, Cramer discovered what rural consciousness looks like and defines it as three sentiments: caring about perspectives of power, primarily that urban areas have all of it and rural areas have none; respect for the “rural way of life”; and the perspective that too few resources are allocated to rural areas (3). 
All in all, her definition of rural community consciousness paints a picture of rural Americans feeling that urban Americans, and by extension government officials who are overly influenced by urban values, have no respect for their way of life, and are draining rural resources and livelihoods through welfare programs and other legislative efforts that advantage urban areas. This perceived allocation of resources exclusively toward urban areas does not reflect reality, and many government funds and programs target rural areas. However, governmental bodies do not always prioritize marketing their budget allocations and, as a result, many rural communities are uninformed about the inner workings of the political system. In addition, because rural Americans are disillusioned, they have little desire to learn about government activities, leading these causal beliefs to go unchallenged and unresearched (4). This breakdown in communication and understanding has far-reaching effects. The Republican Party has seized upon these sentiments to further its agenda. Many rural Americans are wary of governmental employees and programs that they view as elite and guilty of stealing rural money for personal or political gain (5). Journalist Thomas Frank understands the power of rural resentment but argues that it is not necessarily about a lack of understanding of budgetary inner workings or community anger but is rather about character assessment. He asserts that while resentment and hostility are important in understanding the rural vote, the most crucial factor is actually authenticity (6). Republicans, he argues, have successfully rebranded Democrats as out-of-touch city elites worthy of scorn. Even as Republican political figures push legislation that hurts working-class Americans, they successfully market themselves as relatable, the politicians that a voter would want to have a beer with (7). This authenticity wins them voters despite their lack of concrete political achievements for lower-income, working-class Americans. When putting these scholarly findings in conversation with the campaign message on which Trump ran, it is easy to see how in 2016 Trump played upon the resentment and despair of rural areas by framing the Democrats, and by extension the urban liberal elite, as the cause of all problems. Throughout the campaign, Trump had a habit of saying exactly what was on his mind, perhaps giving him an air of over-a-beer authenticity and relatability—he certainly appeared honest due to his unfiltered dialogue. Moreover, his lack of political experience likely worked in his favor in areas where government officials are seen as untrustworthy. By contrast, his opponent, former Secretary of State Hillary Clinton, had held governmental office at various levels for many years, which played into Trump’s painting of her in the eyes of many rural communities as an urban elite living off the people’s hard-earned money. Further complicating the relationship between rural and urban areas is the perception that urban areas are more liberal or have different ways of life. Sociologist Jennifer M. Silva argues that some rural voting patterns can be explained by the considerable amount of fear that exists in many of these places, both around shrinking economic opportunities and the general future of communities (8). This fear often manifests itself as a feeling that America must return to “disciplined values” such as hard work, or worries that immigrants are stealing all the well-paying jobs.
In his campaign, Trump certainly identified these fears (9). This could be seen in his harsh anti-immigrant rhetoric, seemingly placing the blame on immigrants for the lack of decent-paying jobs. Trump also emphasized his skills as a businessman, arguing that they would help him run the country and expand the job market. Many authors believe that Trump’s strong performance in rural counties can be explained by the “landscapes of despair” theory, arguing that all of the areas in which Trump over-performed or Clinton underperformed have experienced immense social, economic, and health declines over the past several decades. These authors believe that Trump appeals to voters who are not necessarily the poorest in America but whose lives are worse off than they were several decades ago (10). Trump spoke to that pain and offered these Americans a message that appealed to their despair. Goetz, Partridge, and Stephens find that economic conditions have changed over time throughout rural communities, with urban centers becoming more prominent and fewer agricultural jobs available. However, they find that not all rural areas are doing uniformly poorly economically across America. Instead, there has been “profound structural change” in most of these areas in terms of the types of employment available (11). This structural change could contribute to the feeling among many rural Americans of having been left behind and could also explain some of the draw to Trump’s nationalism, as trade and increased globalization, along with new technology, have contributed to this extreme change (12). Goodwin, Kuo, and Brown agree with this theory and find a correlation between higher rates of opioid addiction in a county and the percent of the county that voted for Trump (13). They found that opioid addiction is one way to measure the sociocultural and economic factors that often created support for Trump and noted that simple unemployment measurements fail to capture this same trend. These two pieces of data imply that the voting patterns of Trump’s supporters do not correspond with being worse off economically than the rest of America, but rather are related to whether people personally feel like they and their communities are backsliding, with opioid addiction as an indicator of this attitude. Gollust and Miller argue that the opioid crisis triggered support for Trump, not necessarily because it is a measurement of sociocultural factors within communities, but because it triggered a comparison in the minds of people living in communities where the crisis was rampant (14). Through experimentation, Gollust and Miller found that Republicans and Trump supporters were more likely than Democrats to view whites as the political losers in the country (15). It is easy to see how Trump’s aggressive rhetoric appealed to people who felt like they were losing and that they needed a fighter to advocate for them. Berlet and Sunshine, journalists and researchers at Political Research Associates, believe that Trump’s rise can be attributed to changing ways of life and Trump’s connection to right-wing populism (16). They argue that there was a rise in the notion that the white-Christian-heterosexual-American way of life is “under threat” in the years preceding the election. They believe that Trump’s brash candidness, his willingness to invoke Islamophobia, homophobia, and xenophobia, and his appeals to Christianity and the patriarchy tapped into a deep-simmering rage that had been growing among rural people (17).
In this way, white racial antagonism contributed to Trump’s success. Rural Americans redirected their despair into rage toward those individuals and collectives that they perceived as a threat to their way of life. This argument is heavily focused on the effects of bigotry and anger on people’s voting choices, whereas several other authors, such as Cramer and Frank, believe that rural support for Trump was much less rage-based and much more about a lack of trust in government and the feeling of being neglected for years. We believe that the theories of despair and feelings of backsliding can explain some of the trend toward rural support for Trump in 2016. We believe that the data will show that the most important despair factors capture not how badly off a community was in 2016, but rather the comparison: how much worse off it was in 2016 than several decades earlier. Finally, we agree with Cramer’s theory of rural consciousness and feel that it may have played a role in general distrust for Clinton as a candidate, but found it impossible to test those attitudes given the data available. Methodology To test the effect of rural pain and despair on GOP vote share in Michigan, we used data at the county level, primarily from 2016, which came from the United States Census Bureau and the Institute for Health Metrics and Evaluation (18). We focused specifically on the 2016 election because of the connection between Donald Trump’s share of the vote and struggling rural voters, which was higher than that of previous GOP candidates Mitt Romney and John McCain (19), as well as Trump’s reputation as an outsider (20). We chose to look at data from all Michigan counties, regardless of which candidate the county voted for or whether the county flipped parties between 2012 and 2016. We chose not to look exclusively at flipped counties because Trump flipped only twelve counties in Michigan. In order to obtain a statistically significant and unbiased result about the effects despair factors had on county result data, more than twelve data points were needed. We instead measured the effect of despair factors on the vote share that Trump received in each county in 2016. We defined six despair factors to represent the challenges and pains each county faced at the time of, or leading up to, the 2016 election. The first three of these factors are defined as the percent change in age-standardized mortality rates between 1980 and 2014 for the following: alcohol use disorders, drug use disorders, and self-harm injuries (alcoholchange, drugchange, and selfchange, respectively). The source from which we obtained information on drug use disorder–related fatalities did not provide a breakdown by substance, so we are unable to determine how much of this factor can be attributed to the ongoing opioid crisis. However, due to the sweeping nature of the crisis, particularly in rural working-class communities, we believe there is some relationship between the drug-use-disorder mortality rate and the opioid crisis (21). Factors that measure changes in living conditions over time, such as changes in fatal overdoses, alcohol deaths, and suicides, will test whether despair is truly about voters’ communities becoming worse than they were before. The fourth despair factor (undereducated) represents the education level of each county, using the percent of adults over 18 whose highest educational attainment in 2014 was a high school degree or less.
This is an important factor to examine while exploring despair because lower levels of education limit career and income options and are often correlated with greater instances of feeling trapped or stuck in a community (22). The final two factors in the exploration are unemployment and the percentage of the county population that died of any cause in 2016. This last factor is important to consider because higher death rates often show that a county has an aging population and can accordingly suggest that younger people are choosing to leave. If the theories described above are true, unemployment should not matter as much because the “landscapes of despair” theory focuses on decline in communities and in economic opportunities, meaning many voters could be employed but working longer hours, harder jobs, getting paid less, or feeling like they have fewer opportunities than they once did. To test this, we ran the same statistical analysis for unemployment but specifically looked at whether the variable was statistically significant in predicting the Trump vote. If it was not, that would support Goodwin, Kuo, and Brown’s theory that unemployment is not the best measure of Trump’s support in 2016. We used two statistical methods of analysis. First, we used histograms to compare a single despair factor, such as percent change in alcohol use disorder–related deaths, against the way that the county voted in 2016 to see if certain factors of despair disproportionately affected one party’s vote share. Second, we used the regression equation below to test our hypothesis that the six aforementioned despair factors led to Trump’s higher vote share in Michigan. Finally, we analyzed the despair factors individually to show their discrete effects on the GOP vote share in Michigan’s 2016 election. TRUMPSHARE_i = β_0 + β_1·DRUGCHANGE_i + β_2·ALCOHOLCHANGE_i + β_3·SELFCHANGE_i + β_4·UNDEREDUCATED_i + β_5·UNEMPLOYMENT_i + β_6·PRCNTDEATHS_i + ε_i We fit the model using the county-level data we gathered to examine whether the test statistic led us to reject or fail to reject the null hypothesis that there is no relationship between these six despair factors and the vote share that Trump received in Michigan in 2016. We also used the R-squared from this regression to determine the strength of the linear relationship captured by the regression equation. Results and Discussion The results we found conclusively show that we can reject the null hypothesis that there is no relationship between the six despair factors and Trump’s success in a county. The first statistical method we used was comparing histograms of each factor broken down by party identification. We found graphical evidence to suggest that death rates, suicide, undereducation, and unemployment were disproportionately higher in Republican counties. Changes in alcohol and drug deaths (alcoholchange and drugchange) did not seem to be strongly correlated with one specific party, though counties that voted very strongly for Republicans did seem to have the highest values (highest percent changes from 1980 to 2014) for both of these variables. In some respects, the fact that many of these despair factors were higher in counties that voted Republican makes sense. By 2016, Barack Obama had been president for eight years, and often people who are unhappy with how the economy has been faring or who are unemployed vote for the candidate from the opposite party of the sitting president.
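To make the pipeline described above concrete, here is a minimal sketch, in Python with pandas and statsmodels, of how the despair factors and the county-level regression could be assembled. The file name, column names, and DataFrame layout are hypothetical stand-ins for illustration, not the authors' actual dataset or code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical county-level table; column names are illustrative only.
counties = pd.read_csv("michigan_counties.csv")

# Percent change in age-standardized mortality rates, 1980 to 2014,
# for alcohol use disorders, drug use disorders, and self-harm injuries.
for cause in ["alcohol", "drug", "self"]:
    counties[f"{cause}change"] = (
        (counties[f"{cause}_mortality_2014"] - counties[f"{cause}_mortality_1980"])
        / counties[f"{cause}_mortality_1980"] * 100
    )

# Remaining despair factors: share of adults with a high school degree or less (2014),
# unemployment rate (2016), and share of the population that died of any cause (2016).
counties["undereducated"] = counties["hs_or_less_pct_2014"]
counties["unemployment"] = counties["unemployment_rate_2016"]
counties["prcntdeaths"] = counties["deaths_2016"] / counties["population_2016"] * 100

# Fit the six-factor OLS model of Trump's 2016 county vote share.
model = smf.ols(
    "trumpshare ~ drugchange + alcoholchange + selfchange + "
    "undereducated + unemployment + prcntdeaths",
    data=counties,
).fit()

print(model.summary())   # coefficients, t-statistics, p-values
print(model.rsquared)    # overall R-squared of the fit
```

The same hypothetical DataFrame could also drive the histogram comparison mentioned above, for example by plotting each despair factor grouped by the party that carried the county.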
However, large values of other factors, such as percent change in deaths from self-harm, are more alarming, as these first three variables measure changes dating back to the 1980s. We were surprised that alcohol and drug deaths seemed to be more evenly spread out between the parties than self-harm mortality, which was particularly unexpected due to the amount of literature on the correlation between those affected by the opioid epidemic and votes for Trump (23). Perhaps Goodwin, Kuo, and Brown’s theory that increased opioid usage is a good instrumental variable for Trump support still holds because this data set only captures drug mortality, not drug use. It is entirely possible that Trump counties have higher drug use, but we could not make a conclusion based on the data (24). However, due to the large percentage of drug overdoses that can be attributed to the opioid crisis, it is surprising that Gollust and Miller’s and Goodwin, Kuo, and Brown’s theories did not find more support in this data set (25). The 3-D graphs in the appendix look at the relationships between the vote share that Trump received and percent changes in alcohol, drug, and self-harm mortality rates (26). The regression planes on these 3-D graphs show that percent change in self-harm mortality is the only variable with a clearly positive relationship to Trump votes. The other two changes in mortality variables have weaker linear relationships with Trump votes in part due to several county outliers. Exploring those outlier counties further and investigating why they do not follow the common trend would be an interesting topic for ethnographic research. When we ran the regression analysis the first time, we included all six of the variables we categorized as measures of despair. We also ran the regression analysis with different combinations of these variables to see if we could increase the adjusted R-squared value, which accounts for whether adding another variable actually improves the model. We found that the model performed best when we excluded the unemployment variable, and because its t-statistic was not statistically significant, we excluded it from the final regression in order to have a more accurate model. At first, we were surprised that unemployment was not significant in the model; however, this seems to support the theory that many “despair voters” do have jobs—they are just low-paying and highly stressful (27). This supports Goodwin, Kuo, and Brown’s analysis that the unemployment level alone is not a good measurement of whether a county voted heavily for Trump. Moreover, the histogram shows that high levels of unemployment are not necessarily correlated with high percentages of the vote going to Trump. Clearly, there are other factors at play that this statistic fails to capture, and unemployment could be an incomplete benchmark for despair because it does not measure satisfaction in jobs nor whether a job pays a living wage. Overall, we found that a model with the five factors of despair besides unemployment gave an R-squared of .552, meaning that about 55% of the variance in the percentage of votes Trump won in a given county could be attributed to these factors alone. This is remarkably high considering that neither policies nor previous voting records were included in this regression. However, the only variables that were found to be statistically significant on their own were percent changes in self-harm deaths and percent of undereducated voters.
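The variable-selection step described above can be illustrated with a short, hedged sketch: fit the full six-factor model and a reduced model without unemployment, then compare the unemployment p-value and the adjusted R-squared of each fit. It reuses the hypothetical `counties` DataFrame from the previous sketch; the column names remain illustrative assumptions, not the authors' actual data.

```python
import statsmodels.formula.api as smf

# `counties` is the hypothetical DataFrame assembled in the previous sketch.

# Full model with all six despair factors.
full = smf.ols(
    "trumpshare ~ drugchange + alcoholchange + selfchange + "
    "undereducated + unemployment + prcntdeaths",
    data=counties,
).fit()

# Reduced model dropping unemployment.
reduced = smf.ols(
    "trumpshare ~ drugchange + alcoholchange + selfchange + "
    "undereducated + prcntdeaths",
    data=counties,
).fit()

# If unemployment adds little explanatory power, its p-value will be large and
# the adjusted R-squared of the reduced model will be as high or higher.
print("unemployment p-value:", full.pvalues["unemployment"])
print("adjusted R2, full:   ", full.rsquared_adj)
print("adjusted R2, reduced:", reduced.rsquared_adj)
```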
We were surprised that percent changes in alcohol and fatal drug overdoses were not more significant than changes in self-harm deaths, but again, that could be partially attributed to the fact that the data only measures overdoses rather than frequency of use. While one would assume that there would be a positive representative relationship between the two, it is hard to know for sure. However, we can say that, on average, increases in despair in certain aspects of life are correlated with an increase in support for Trump in the 2016 election, supporting the original hypothesis of this paper that rural despair played into Trump’s win in Michigan in 2016. However, we fail to find definitive conclusions regarding some of the connections drawn in previous scholarly literature between opioid overdose and the Trump vote. Perhaps the most striking analysis is running the same regression but with Democratic vote share in the 2016 election and comparing the results with those from the Republican vote share. As seen in the table below, the coefficients for each variable nearly flip signs. A decrease in suicide-, alcohol-, and drug-related deaths, or other despair factors, can be expected on average to be associated with a positive increase among the percentage of the county “voting blue.” Counties that vote Democratic, at least on average, tend to have had some sort of positive change, on the individual or communal level, around certain measures of despair (28). This does not mean that Clinton voters were necessarily better off than all Trump voters across Michigan, but rather that Clinton voters had seen their lives improve, if only marginally, and Trump voters had not. Theories of despair regarding rural voters do not compare the lives of rural voters to those of voters in other areas of the state but rather investigate whether rural communities are worse off than they were several decades ago. Similarly, just because certain counties have seen an improvement in certain despair factors does not mean that their communities are not also grappling with alcohol, drug, and mental health issues. Additionally, better-educated counties tend to vote Democratic, with less-educated counties voting Republican. This is a reversal of certain historical trends (29). Again, at some level it is logical that voters who are doing better vote for the party that has been in power for the past several years. However, the data in these studies capture decades of crumbling communities. There is a downward trend in these communities in terms of levels of despair that shows that regardless of which party these counties vote for (whether they vote for the opposite party when they feel dissatisfied with the current one, or for the same party when things seem to be going well), neither party has been able to stop the 34-year trends of increases in suicide-, drug-, and alcohol-caused deaths. This validates theories of “rural consciousness” and “rural despair” by Cramer and Goetz, Partridge, and Stephens that rural communities clearly see and feel suffering in their communities and perceive a lack of attention and resources given to them (30). One could also argue that this supports Silva’s theory that many rural communities fear for their futures based on the downward spiral these communities have experienced for several years or decades (31). This fear could motivate voters to act more drastically or to believe that a massive change is necessary. 
In the voting booth, this could lead to their voting for a more unconventional candidate. Trump's main slogan was "Make America Great Again," suggesting that, at some level, he understood and was trying to court those experiencing this sense of despair. For many voters, America is the best it has ever been: we have unprecedented levels of rights and acceptance for women, minorities, and members of the LGBTQ+ community. Going back seems like regression, not progress. But as shown by this data, many of the counties that voted for Trump in 2016 were better off by certain metrics in 1980 than they are now. It makes sense that residents of these counties could be worried about the continuing decline of their communities and could want to go back to a better time and quality of life. Moreover, according to Cramer's thesis of rural consciousness, voters in rural Michigan could be very distrustful of any type of governmental employee promising change. Trump's brand as a businessman with no prior political experience could have especially appealed to those affected by rural-consciousness thinking. His role as an outsider was relatable. His phrase "drain the swamp" directly spoke to the prevailing belief in these communities that Washington, DC is full of people who take taxpayers' money and waste time. His opponent had been in the public eye for years in various government positions and was by extension seen, and marketed by conservative news outlets, as the leader of the "liberal elite." Particularly in contrast with her, Trump could have seemed appealing to those rural voters.

The data we found strongly supports Cramer's thesis that rural despair and resentment led to the crumbling of the Blue Wall. In order for Democrats to rebuild their former strongholds in these states, the party must examine the real pain and anger that many rural voters experience. They need to understand the hopelessness people are feeling and recognize why Trump specifically appeals to them. Trump and the Republican Party have been strategic in tapping into the anger, fear, and pain that rural voters feel. Democrats contributed to the phenomenon of rural consciousness and the belief that Democrats are coastal elites who neither care about nor understand middle America (32). Clinton and other Democrats have made several public missteps, including making fun of these voters, that have further reinforced this idea. Trump has succeeded in directing rural voters' anger and mistrust toward the government, specifically bureaucracy and governmental programs that could actually help rural areas. Overall, Democrats need to strengthen their relationship with white working-class voters, and understanding rural despair and consciousness might be the first step to doing so. They need to consider creating messages that specifically address and appeal to rural voters and find and support candidates who can connect with them. To win back rural voters, Democrats also need to focus on messaging in rural America. That includes creating programs that provide resources and relief to these struggling areas, but also, perhaps more importantly, it requires making sure that rural communities are aware of these resources. If rural communities still view government as ineffective and uninterested in their problems, these programs will not be sufficient.
It will take significant effort and messaging on behalf of Democrats to convince enough voters that the Democrats' party, not Trump's, actually represents rural Americans' best interests. While President Biden managed to do this in 2020, very narrowly, it remains to be seen whether other Democratic candidates will be able to or will even want to capitalize on this messaging. It also remains to be seen which candidates will seem authentic to rural voters—clearly this was a big factor in Trump's victory and was maybe an even bigger factor contributing to Clinton's loss. Going forward, the Democrats will need to support candidates who can reach rural voters effectively and authentically, which remains a tall order. While Trump, not establishment Republicans, created a new coalition that drew on rural pain and despair, it would be naive to assume that the Republican Party will not continue to take advantage of rural despair to win elections. Since Trump's defeat, the messaging of the Republican Party has remained largely the same as when Trump was in office. If Democrats do not devote resources to successfully addressing these voters, they will have to accept the possibility that their once reliable Blue Wall will fall again or will never be rebuilt, and they will need to find another sizable coalition of voters to target in order to win elections at every single level.

Appendix

Endnotes
1 Pottie-Sherman, "Rust and Reinvention," 2.
2 Cramer, The Politics of Resentment, 11.
3 Ibid., 54.
4 Cramer and Toff, "The Fact of Experience."
5 Cramer, Politics of Resentment, 127.
6 Frank, What's the Matter with Kansas?, 113.
7 Frank, What's the Matter with Kansas?, 119.
8 Silva, We're Still Here, 45.
9 Inglehart and Norris, "Trump and the Populist Authoritarian Parties."
10 Monnat and Brown, "More than a Rural Revolt."
11 Goetz, Partridge, and Stephens, "The Economic Status of Rural America in the President Trump Era and Beyond," 101.
12 Ibid., 117.
13 Goodwin et al., "Association of Chronic Opioid Use With Presidential Voting Patterns in US Counties in 2016," e180450.
14 Gollust and Miller, "Framing the Opioid Crisis: Do Racial Frames Shape Beliefs of Whites Losing Ground?" Journal of Health Politics, Policy and Law 45, no. 2 (April 2020): 241–276.
15 Gollust and Miller, "Framing the Opioid Crisis: Do Racial Frames Shape Beliefs of Whites Losing Ground?"
16 Berlet and Sunshine, "Rural Rage," 480–82.
17 Ibid., 490.
18 Foster-Molina and Warren, Partisan Voting, County Demographics, and Deaths of Despair Data.
19 Monnat, "Deaths of Despair and Support for Trump in the 2016 Presidential Election."
20 Cramer, Politics of Resentment, 127–137.
21 Florian Sichart et al., "The Opioid Crisis and Republican Vote Share."
22 Autor, Katz, and Kearney, "The Polarization of the U.S. Labor Market."
23 Goodwin et al., "Association of Chronic Opioid Use With Presidential Voting Patterns in US Counties in 2016," e180450.
24 Ibid.
25 Imtiaz et al., "Recent Changes in Trends of Opioid Overdose Deaths in North America."
26 Created with the help of Ella Foster-Molina.
27 Torraco, "The Persistence of Working Poor Families in a Changing U.S. Job Market."
28 We do not mean to suggest that Democratic voters do not face their own share of struggles, but rather that this data on average suggests that counties that voted Democratic were less affected by these specific measures of despair in 2016.
29 Harris, "America Is Divided by Education."
30 Goetz, Partridge, and Stephens, "The Economic Status of Rural America in the President Trump Era and Beyond." Applied Economic Perspectives and Policy 40, no. 1 (February 16, 2018).
31 Kim Parker et al., "Similarities and Differences between Urban, Suburban and Rural Communities in America."
32 Cramer, Politics of Resentment, 127–137.

Bibliography

Autor, David H., Lawrence F. Katz, and Melissa S. Kearney. "The Polarization of the U.S. Labor Market." American Economic Review 96, no. 2 (April 1, 2006): 189–94. https://doi.org/10.1257/000282806777212620.
Berlet, Chip, and Spencer Sunshine. "Rural Rage: The Roots of Right-Wing Populism in the United States." The Journal of Peasant Studies 46, no. 3 (April 16, 2019): 480–513. https://doi.org/10.1080/03066150.2019.1572603.
Cramer, Katherine J. The Politics of Resentment: Rural Consciousness in Wisconsin and the Rise of Scott Walker. Chicago Studies in American Politics. Chicago: University of Chicago Press, 2016.
Cramer, Katherine J., and Benjamin Toff. "The Fact of Experience: Rethinking Political Knowledge and Civic Competence." Perspectives on Politics 15, no. 3 (September 2017): 754–70. https://doi.org/10.1017/S1537592717000949.
Florian Sichart, Jacob Chapman, Brooklyn Han, Hasan Younis, and Hallamund Meena. "The Opioid Crisis and Republican Vote Share." LSE Undergraduate Political Review, February 13, 2021. https://blogs.lse.ac.uk/lseupr/2021/02/13/the-opioid-crisis-and-republican-vote-share/.
Foster-Molina and Warren. Partisan Voting, County Demographics, and Deaths of Despair Data, February 2019.
Frank, Thomas. What's the Matter with Kansas? How Conservatives Won the Heart of America. 1st ed. New York: Metropolitan Books, 2004.
Goetz, Stephan J., Mark D. Partridge, and Heather M. Stephens. "The Economic Status of Rural America in the President Trump Era and Beyond." Applied Economic Perspectives and Policy 40, no. 1 (March 2018): 97–118. https://doi.org/10.1093/aepp/ppx061.
Goodwin, James S., Yong-Fang Kuo, David Brown, David Juurlink, and Mukaila Raji. "Association of Chronic Opioid Use With Presidential Voting Patterns in US Counties in 2016." JAMA Network Open 1, no. 2 (June 22, 2018): e180450. https://doi.org/10.1001/jamanetworkopen.2018.0450.
Harris, Adam. "America Is Divided by Education." The Atlantic, November 7, 2018. https://www.theatlantic.com/education/archive/2018/11/education-gap-explains-american-politics/575113/.
Imtiaz, Sameer, Kevin D. Shield, Benedikt Fischer, Tara Elton-Marshall, Bundit Sornpaisarn, Charlotte Probst, and Jürgen Rehm. "Recent Changes in Trends of Opioid Overdose Deaths in North America." Substance Abuse Treatment, Prevention, and Policy 15, no. 1 (December 2020): 66. https://doi.org/10.1186/s13011-020-00308-z.
Inglehart, Ronald, and Pippa Norris. "Trump and the Populist Authoritarian Parties: The Silent Revolution in Reverse." Perspectives on Politics 15, no. 2 (June 2017): 443–54. https://doi.org/10.1017/S1537592717000111.
Kim Parker, Juliana Horowitz, Anna Brown, Richard Fry, D'Vera Cohn, and Ruth Igielnik. "Similarities and Differences between Urban, Suburban and Rural Communities in America." Pew Research Center's Social & Demographic Trends Project. Pew Research Center, May 22, 2018. https://www.pewresearch.org/social-trends/2018/05/22/what-unites-and-divides-urban-suburban-and-rural-communities/.
Monnat, Shannon M. "Deaths of Despair and Support for Trump in the 2016 Presidential Election," 2016. https://doi.org/10.13140/RG.2.2.27976.62728.
Monnat, Shannon M., and David L. Brown. "More than a Rural Revolt: Landscapes of Despair and the 2016 Presidential Election." Journal of Rural Studies 55 (October 2017): 227–36. https://doi.org/10.1016/j.jrurstud.2017.08.010.
Pottie-Sherman, Yolande. "Rust and Reinvention: Im/migration and Urban Change in the American Rust Belt." Geography Compass 14, no. 3 (December 7, 2019). https://doi.org/10.1111/gec3.12482.
Silva, Jennifer M. We're Still Here: Pain and Politics in the Heart of America. New York, NY: Oxford University Press, 2019.
Torraco, Richard J. "The Persistence of Working Poor Families in a Changing U.S. Job Market: An Integrative Review of the Literature." Human Resource Development Review 15, no. 1 (March 2016): 55–76. https://doi.org/10.1177/1534484316630459.


The Pay Gap Among Academic Faculty for Higher Education in the U.S.
Yucheng Wang (Author)
Aditi Bhattacharjya, Jason Fu, and Meruka Vyas (Editors)

Abstract

This paper investigates whether academic rank, academic field, and gender account for the pay disparity in higher education in the United States. Analyzing 2,235 faculty members at the University of Iowa, I find that pay gaps are primarily driven by academic rank, especially among professors and non-tenured faculty. Within identical academic ranks, gender pay gaps appear only at the assistant professor and non-tenured levels: male assistant professors earn 23.8% more than their female counterparts, while non-tenured males earn 31.6% less than their female counterparts. By analyzing 301 assistant professors, this paper identifies the academic field as another factor in pay discrepancies across academia, particularly among the business, medical, social science, and STEM disciplines. However, gender does not contribute to the pay disparity problem when faculty members share the same academic rank and field of study. Given that this paper does not use data from private institutions or from colleges in other states, its findings can only be generalized to public universities in the U.S.

I. Introduction

In August 2023, five of Vassar's female professors sued the college for wage discrimination against female faculty. According to the Washington Post, full-time male professors at Vassar earn an average annual salary of about $154,200, while full-time female professors earn only about $139,300. In this lawsuit, advocates for the Vassar professors argued that the gender pay gap arose due to substantial differences in starting salaries, alongside a merit rating system biased against women and a discriminatory promotion process that systematically prevents or delays the advancement of female professors compared to males. This gender bias and stark compensation difference between male and female faculty members does not only happen at Vassar College. The American Association of University Professors (AAUP) finds that full-time women professors earn 82% of what their male colleagues earn across academia. The recent lawsuit at Vassar has alleged wage discrimination against female professors, raising questions about the presence of the gender pay gap in academia across the United States. The College and University Professional Association for Human Resources has discovered persistent pay disparities for women in staff and faculty positions at colleges and universities across the United States. Academic researchers and policymakers hypothesize that the gender gap in earnings persists because it is hidden intentionally (Trotter et al., 2017). Given that limited research focuses on the gender pay gap in higher education in the United States, this paper aims to provide evidence of how male and female academic faculty members differ in their earnings at colleges and universities in the United States. Moreover, this paper also investigates non-gendered factors contributing to the pay gap in academia across the United States, such as academic rank and field. Using the University of Iowa as its primary data source, this paper examines whether academic rank, academic field, and gender account for the pay disparity in higher education in the United States. Motivated by Koedel and Pham's research in 2023, I categorize determinants of salary disparity into two areas: conditional gaps and unconditional gaps.
Since compensation differences can be explained partially by the level of faculty’s skills and contributions, the conditional gaps include academic rank and academic field. The remaining unexplained portion of the pay gap falls under the unconditional gap like gender. Faculties with higher seniority typically take more responsibility in teaching and administrative tasks, have more years of experience, and potentially make more substantial contributions to research. Therefore, academic rank has the most significant impact on pay disparity in academia. Besides academic rank, the academic field is the second most influential factor. As some academic fields like medicine or business are historically more prestigious or better funded for research, it leads to a greater pay discrepancy when faculty members have identical academic ranks. As a result, I hypothesize that gender is the least influential factor in the pay gap problem. II. Background President Obama proposed the White House Equal Pay Pledge in 2016 to narrow the gender wage gap across the United States. Following Obama’s campaign, academic institutions formed new committees and commissions on college campuses to address the gender pay inequality problems. For instance, Louisiana State University established the Council on Gender Equity to emphasize gender pay equality. However, research still shows a persistent gender pay gap among university faculty across the United States. AAUP finds that a full-time female professor earned roughly $82 for every $100 a full-time male professor earned in 2023. This compensation difference between male and female full-time professors raises questions about the existence of a gender pay gap for higher education in the United States. Does pay depend on the skills and contributions of males and females equally, or does bias result in differential salary solely based on the individual's gender? From the late 1980s to the mid 2010s, previous research indicates that the gender pay gap, a form of unconditional gap, accounts for 20% of wage difference at research universities (Koedel & Pham, 2023). Besides the unconditional gap, the conditional gap contains the unexplained portion of the pay gap in academia. The conditional gap is typically associated with the academic field, years in the position, and peer performance evaluations. In contrast to the unconditional gap, the conditional gap only accounts for 4% to 6% of wage difference, or 20% to 30% of the unconditional gap (Li & Koedel, 2017). Given that the unconditional gender pay gap is almost three to five times bigger than the conditional gap, this sizable difference underscores the substantial influence of gender on salaries within academia. Furthermore, this finding also suggests that the pay gap in academia is more closely tied to gender-based disparities than to intellectual or performance measurements. III. Data My main estimates are based on an analysis of data from the Iowa Legislature for the University of Iowa in 2022. This annual census survey collects data on full-time and part-time teaching and administrative staff at degree-granting public universities and their affiliated colleges in Iowa State from July 1 to June 30 of the following year. 
The survey covers 14,295 employees at the University of Iowa, including administrative and support staff, librarians, all full-time tenure-track faculty, part-time affiliated staff, adjunct staff, clinical staff in teaching hospitals, visiting scholars, and research staff who have academic ranks and salaries similar to teaching staff, for all those whose term of appointment is not less than 12 months. The main objective of this paper is to examine how male and female academic faculty members differ in their earnings at universities or colleges in the United States. Thus, this dataset excludes clinical staff, visiting scholars, research staff, teaching assistants, administrative and support staff, and librarians. Since the Iowa Legislature mandates that state employees participate in the annual census survey, confidential pay information is obtained directly from the Department of Administrative Services without additional verification or editing. As the publicly available salary data is directly obtained from this State Employee Salary Book, the accuracy of the data and its public accessibility make the University of Iowa an ideal choice for this paper. Demographic, qualification, and salary data were collected for all tenured, tenure-track, and non-tenured-track faculty. Demographic data include each faculty member's gender and county information. Qualification data include each faculty member's academic rank and academic field. The salary data are directly obtained from the Iowa State Employee Salary Book payroll records.

A limitation of the Iowa Legislature database is the generalizability problem. The database only includes state-funded public universities like the University of Iowa, Iowa State University, and the University of Northern Iowa. Other private universities or liberal arts colleges are not included in the Iowa Legislature database due to confidentiality concerns regarding wage information. This restriction poses a significant challenge in drawing broader conclusions about pay disparities across all public or private institutions in the United States.

IV. Methods

Given that faculty wages range from $1,000 to $1,685,834, this paper uses a log specification for salary to normalize the scale of the variable and make the estimates less sensitive to outliers. To examine whether a faculty member's academic rank correlates with the pay gap, I analyzed 2,235 faculty members across the tenured, tenure-track, and non-tenured tracks at the University of Iowa for 2022. Tenured faculty typically secure lifetime professor employment after a six-year probationary period. Tenure-track faculty hold positions as Associate Professor or Assistant Professor and are currently in the promotion and evaluation process towards attaining the status of tenured full professor. All faculty members who are neither tenured nor tenure-track are classified as non-tenured-track faculty. The tenured group includes 496 full-time professors. The tenure-track group includes 415 full-time associate and assistant professors. The non-tenured-track group includes 1,324 employees serving as part-time adjunct staff, Professors of Instruction, Professors of Practice, and Lecturers. The general regression model is represented as follows:

log Salary = β1 AssociateProfessor + β2 NonTenured + β3 Professor + β4 Male + β5 AssociateProfessor * Male + β6 NonTenured * Male + β7 Professor * Male + α + ε

where Male is an indicator variable that equals 1 if the faculty member is male and 0 otherwise.
AssociateProfessor, Professor, and NonTenured are indicator variables that equal 1 if the faculty member is an associate professor, professor, or non-tenured-track faculty member, respectively, and 0 if not. To test for differential returns to rank by sex, I include three interaction variables: AssociateProfessor * Male, Professor * Male, and NonTenured * Male. These interaction variables are equal to 1 if the faculty member is a male associate professor, male professor, or male non-tenured-track faculty member, respectively, and 0 otherwise.

To test whether a faculty member's academic field correlates with the pay gap, I controlled for academic rank and analyzed 301 assistant professors at the University of Iowa for 2022. Faculty members are sampled from six academic disciplines: art, business, humanities, medicine, social science, and STEM (Science, Technology, Engineering, and Mathematics). Once a discipline was selected for sampling, all assistant professors listed on the department website were included in the dataset. The Arts discipline includes 22 assistant professors from the Arts Division in the College of Liberal Arts and Sciences. The Humanities discipline includes 21 assistant professors from the Humanities Division in the College of Liberal Arts and Sciences. The STEM discipline includes 61 assistant professors from the College of Engineering and the Natural and Mathematical Sciences Division in the College of Liberal Arts and Sciences. The Social Sciences discipline includes 41 assistant professors from the Social Sciences Division in the College of Liberal Arts and Sciences and the College of Education. The Business discipline includes 23 assistant professors belonging to the College of Business. The Medical discipline includes 133 assistant professors from the College of Medicine, Dentistry, and Nursing. The general regression model is represented as follows:

log Salary = β1 Business + β2 Humanity + β3 Medical + β4 SocialScience + β5 STEM + β6 Male + β7 Business * Male + β8 Humanity * Male + β9 Medical * Male + β10 SocialScience * Male + β11 STEM * Male + α + ε

where Male is an indicator variable that equals 1 if the assistant professor is male and 0 otherwise (so that β6 captures the male-versus-female difference within the omitted arts field). Business, Humanity, Medical, SocialScience, and STEM are indicator variables that equal 1 if the faculty member is in business, the humanities, medicine, social science, or STEM, respectively, and 0 if not. To test for differential returns to academic field by sex, I include five interaction variables: Business * Male, Humanity * Male, Medical * Male, SocialScience * Male, and STEM * Male. These interaction variables are equal to 1 if the assistant professor is a male in business, the humanities, medicine, social science, or STEM, and 0 otherwise.

V. Results

V.a.1 Academic Rank on Salary

AAUP finds that the average annual salary in 2021 for assistant professors, associate professors, and professors was about $83,300, $96,000, and $140,500, respectively. Professors typically have more years of experience and research grants, so they tend to have higher salaries than associate professors or assistant professors. From this result, I hypothesize that academic rank correlates with pay disparity in academia. To test my hypothesis, the regression in Table 1 compared each faculty member's salary based on their academic rank, with the expectation that professors have the highest wages compared to associate professors, assistant professors, and non-tenured-track faculty.
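As a concrete illustration of the two specifications above, the sketch below estimates the academic-rank model with gender interactions using statsmodels' formula interface. The file name and column layout (iowa_faculty_2022.csv with salary, rank, and male columns) are illustrative assumptions rather than the paper's actual code or data, and the academic-field model can be estimated the same way by swapping the rank column for a field column.

    # Illustrative sketch of the rank-by-gender interaction regression
    # (hypothetical file and column names).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    faculty = pd.read_csv("iowa_faculty_2022.csv")  # hypothetical extract of the salary book
    faculty["log_salary"] = np.log(faculty["salary"])

    # C(rank, Treatment('AssistantProfessor')) builds indicator variables for
    # AssociateProfessor, NonTenured, and Professor with assistant professors as the
    # omitted baseline; '*' adds both the Male main effect and the interaction terms.
    model = smf.ols(
        "log_salary ~ C(rank, Treatment('AssistantProfessor')) * male",
        data=faculty,
    ).fit()
    print(model.summary())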
The non-tenured-track faculty in my dataset consist of adjunct staff, lecturers, Professors of Instruction, and Professors of Practice. At the University of Iowa, Professors of Instruction and Professors of Practice are instructional faculty solely responsible for teaching and not involved in administrative duties. Since the hiring criteria for instructional faculty are less rigorous in terms of research and scholarship requirements than those for tenured or tenure-track faculty, instructional faculty tend to receive lower compensation. In this regression, the adjusted R2 is 54.8%, which is consistent with my hypothesis that academic rank accounts for salary disparity. The β1, β2, and β3 coefficients indicate relative salary differences for female associate professors, non-tenured-track faculty, and professors compared to female assistant professors. Because of their higher ranks, the predicted signs for the β1 and β3 coefficients are positive, suggesting that female associate professors and professors receive higher salaries than female assistant professors. However, the β1 coefficient is negative and not statistically significant at the 0.01 level, suggesting that, among female faculty, pay does not rise significantly from the assistant to the associate professor rank. The β3 coefficient is positive and suggests that female professors earn approximately 26.3% more than female assistant professors. Since non-tenured-track faculty are part-time or instructional faculty, they receive lower compensation than their tenured or tenure-track colleagues. Thus, the predicted sign for β2 is negative. The β2 coefficient is indeed negative and very large in magnitude (173.7 log points as reported), indicating that female non-tenured-track faculty earn far less than female assistant professors (on the order of 80% less once the log-scale coefficient is converted to a percentage). Given that both the Professor and NonTenured variables are economically and statistically significant at the 0.01 level, I conclude that academic rank accounts for wage discrepancies only at the professor and non-tenured-track levels.

V.a.2 Gender Effect on Salary Within Same Academic Rank

AAUP found that full-time women professors earned 82 cents for every dollar their male counterparts earned in 2023. This compensation disparity motivated me to investigate whether gender influences salary within identical academic rank conditions. Figure 1 shows that male faculty members earn much more than their female counterparts at the professor, associate professor, and assistant professor levels. However, female non-tenured faculty members receive higher salaries than their male colleagues. Therefore, within the same academic rank, I hypothesize that gender accounts for pay discrepancy. To test my hypothesis, the regression in Table 1 compares each faculty member's salary based on their gender when they have identical academic ranks. The β4, β5, β6, and β7 coefficients indicate relative salary differences for male assistant professors, associate professors, non-tenured-track faculty, and professors compared to their female counterparts under the same academic rank condition. I expect males to earn less at the non-tenured level while earning more at the professor, associate professor, or assistant professor level. The predicted sign for β6 is therefore negative while β4, β5, and β7 are positive. However, neither the β5 nor the β7 coefficient is statistically significant at the 0.01 level, suggesting that pay differences at the associate professor and professor levels cannot be attributed to gender.
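One caveat on reading these figures: with log salary as the outcome, a coefficient of β corresponds to an exp(β) − 1 percentage difference, so a large negative estimate such as the non-tenured gap cannot be read literally as "more than 100% less." The snippet below shows the conversion, under the assumption that the percentages reported in the text are the raw log-point coefficients.

    # Converting log-salary coefficients into implied percentage pay differences.
    import math

    def pct_difference(beta: float) -> float:
        """Exact percentage difference implied by a coefficient on log salary."""
        return (math.exp(beta) - 1) * 100

    # Treating the reported figures as log-point coefficients (an assumption):
    print(pct_difference(0.263))   # professor vs. assistant professor: about +30.1%
    print(pct_difference(-1.737))  # non-tenured vs. assistant professor: about -82.4%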
In contrast, the β4 coefficient is positive and suggests that male assistant professors earn roughly 23.8% more than female assistant professors. The β6 coefficient is negative and suggests that male non-tenured-track faculty earn roughly 31.6% less than their female counterparts. Since both the Male and NonTenured * Male variables are economically and statistically significant at the 0.01 level, I conclude that gender bias accounts for wage disparity only at the assistant professor and non-tenured levels when female and male faculty hold identical academic ranks.

V.b.1 Academic Field on Salary Within the Same Academic Rank

Under the same academic rank, is gender the sole factor contributing to the wage disparity at the University of Iowa? Previous research suggests that academic rank and academic field account for 4% to 6% of the wage difference in academia (Li & Koedel, 2017). To examine whether the academic field impacts wage differences, I analyzed 301 assistant professors across six academic departments at the University of Iowa in 2022. The regression in Table 2 compared each assistant professor's salary based on their academic field. I anticipate that medical, business, and STEM assistant professors will have the highest wages relative to other disciplines. Professions like doctors, investment bankers, and software engineers are known for their lucrative salaries. As a result, students are more likely to declare majors in medical, business, or STEM subjects. To meet the growing demand for these majors while providing a more robust academic curriculum, universities and liberal arts colleges offer competitive salaries to attract top-tier talent for teaching positions. Therefore, I hypothesize that the academic field accounts for wage discrepancy at the assistant professor level.

In this regression, the adjusted R2 is 32.9%, which is consistent with my hypothesis that the academic field accounts for salary disparity. The β1, β2, β3, β4, and β5 coefficients indicate relative salary differences for female assistant professors in business, the humanities, medicine, social science, and STEM compared to female assistant professors in the arts field. Since the business, medical, and STEM fields provide lucrative salaries, the predicted signs for β1, β3, and β5 are positive and larger in magnitude, suggesting that female assistant professors in business, medicine, or STEM receive higher salaries than those in the arts. Since the β2 coefficient is slightly positive and not statistically significant at the 0.01 level, it suggests that the salary difference for female assistant professors between the humanities and the arts is negligible. In contrast, the β1, β3, β4, and β5 coefficients are positive, which suggests that female assistant professors in the business, medical, social science, and STEM fields earn substantially more than those in the arts. Given that the β1, β3, β4, and β5 coefficients are positive and statistically significant at the 0.01 level, I conclude that the academic field accounts for wage disparity for assistant professors in the business, medical, social science, and STEM disciplines.

V.b.2 Gender Effect on Salary Within Same Academic Rank and Field

In Figure 2, I compare the average annual salaries of male and female assistant professors within the Arts, Business, Humanities, Medical, Social Science, and STEM fields at the University of Iowa in 2022.
Within the same academic field, the greatest gender gap is $110,510 in the Medical department while the smallest is only $558 in the Humanities department. Given that a gender pay gap exists in all six departments, I hypothesize that gender accounts for pay disparity when female and male assistant professors are within the same academic field. To test my hypothesis, the regression in Table 2 compared each assistant professor's salary based on their gender when they are in the same academic discipline. The β6, β7, β8, β9, β10, and β11 coefficients indicate relative salary differences for male assistant professors in Arts, Business, Humanities, Medical, Social Science, and STEM compared to female assistant professors in the same academic field. Since the β6, β7, β8, β10, and β11 coefficients are economically insignificant (below the 0.1 threshold) and statistically insignificant at the 0.01 level, I find no gender pay gap at the assistant professor level within the Arts, Business, Humanities, Social Science, and STEM departments. However, the β9 coefficient suggests that male assistant professors earn roughly 32.4% more than female assistant professors within the medical field. As the β9 coefficient is economically significant (above the 0.1 threshold) but statistically insignificant at the 0.01 level, I conclude that there is no statistically detectable gender pay gap at the assistant professor level within the medical field. In sum, gender does not account for wage disparity at the assistant professor level when faculty members are in the same department.

V.b.3 Gender Effects for Assistant Professors in the Medical Field

Previous research suggests that women may encounter greater pay inequality in fields in which they are underrepresented (Casad et al., 2022). In Figure 3, I find 23 more male than female assistant professors in the Medical department, while the difference in faculty numbers is less than 10 in each of the Arts, Business, Humanities, and STEM departments. The economically significant relationship is present only in the Medical department, plausibly due to the underrepresentation of female assistant professors. In the University of Iowa's case, 58.6% of assistant professors within the medical field are male and 41.4% are female, which implies that women are underrepresented in the medical department. As the β9 coefficient is economically significant but statistically insignificant at the 0.01 level, it partially confirms my hypothesis that the mismatch between male and female faculty numbers contributes to the gender pay difference. Historical factors help explain why the medical department has more male than female faculty members. Typically, students need three or four years of education in medical school along with three to nine years of medical training before they enter hospitals or academia. The Association of American Medical Colleges (AAMC) finds that the average age of assistant professors at United States medical schools was 45.5 and 43.2 years in 2023, which suggests that current assistant professors received their M.D. or D.O. degrees from medical schools between 1997 and 2004. According to AAMC records, from 1997 to 1998, 58.3% of medical school graduates were male while only 41.7% were female. From 2003 to 2004, 54.1% of medical school graduates were male and 45.9% were female.
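The medical-department composition quoted above can be checked against the counts reported in the text: 133 assistant professors with 23 more men than women implies 78 men and 55 women, or roughly 58.6% and 41.4%. A minimal check of that arithmetic:

    # Quick consistency check on the medical department's reported gender split.
    total, male_surplus = 133, 23        # counts reported in the text
    males = (total + male_surplus) // 2  # 78
    females = total - males              # 55
    print(round(males / total, 3), round(females / total, 3))  # 0.586 and 0.414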
Even though the medical department still has high male representation, the rise of advocacy for women in STEM and the increased proportion of female medical school graduates in AAMC records imply that the gender pay gap in the medical field is likely to narrow in the future.

VI. Conclusion

I use 2022 data from the University of Iowa to investigate factors accounting for wage disparity in higher education in the United States. The findings show that academic rank explains wage differences at the professor and non-tenured-track levels. Within the same academic rank, the gender pay gap exists only at the assistant professor and non-tenured-track levels. Besides academic rank, the academic field also accounts for wage discrepancy when I limit my dataset to the assistant professor level; pay gaps arise among the business, medical, social science, and STEM disciplines. However, when two faculty members hold identical academic ranks, there is no gender pay gap within the same department. To improve and expand on this research, a key focus should be diversifying the dataset by adding more public and private universities and colleges. A larger dataset would provide a comprehensive perspective on whether the gender pay gap in academia is a nationwide inequality problem or a local inequality problem inside Iowa. If the study reveals stark differences between male and female faculty, it would be advisable to inform policymakers of the severity of the issue and propose equity-focused policies, such as implementing pay transparency laws, to reduce pay inequality and associated gender gaps.

Bibliography

Gabriel, D. (2023). Female professors sue Vassar College, alleging wage discrimination. The Washington Post. https://www.washingtonpost.com/education/2023/08/30/vassar-college-wage-discrimination-lawsuit/
American Association of University Professors. (2023). "Annual Faculty Compensation Survey." American Association of University Professors, June 2023. https://www.aaup.org/news/aaup-reports-third-consecutive-year-faculty-wages-falling-short-inflation
Koedel, C., & Pham, T. (2023). The Narrowing Gender Wage Gap Among Faculty at Public Universities in the U.S. SAGE Open, 13(3), 21582440231192936. https://doi.org/10.1177/21582440231192936
Li, D., & Koedel, C. (2017). Representation and Salary Gaps by Race-Ethnicity and Gender at Selective Public Universities. Educational Researcher, 46(7), 343–354. https://doi.org/10.3102/0013189X17726535
College and University Professional Association for Human Resources. (2024). "Representation and Pay Equity in Higher Education Faculty: A Review and Call to Action." College and University Professional Association for Human Resources, April 2024. https://www.cupahr.org/surveys/research-briefs/representation-and-pay-equity-in-higher-ed-faculty-trends-april-2024/
The Iowa Legislature. (2022). https://www.legis.iowa.gov/publications/fiscal/salarybook
Casad, B. J., Garasky, C. E., Jancetic, T. R., Brown, A. K., Franks, J. E., & Bach, C. R. (2022). U.S. Women Faculty in the Social Sciences Also Face Gender Inequalities. Frontiers in Psychology, 13, 792756. https://doi.org/10.3389/fpsyg.2022.792756
Trotter, R. G., Zacur, S. R., & Stickney, L. T. (2017). The new age of pay transparency. Business Horizons, 60(4), 529–539. https://doi.org/10.1016/j.bushor.2017.03.011
Wiedman, C. (2020). Rewarding Collaborative Research: Role Congruity Bias and the Gender Pay Gap in Academe. Journal of Business Ethics, 167(4), 793–807. https://doi.org/10.1007/s10551-019-04165-0
Association of American Medical Colleges. (2023). "U.S. Medical School Faculty Trends: Average Age." Association of American Medical Colleges, Dec 2023. https://www.aamc.org/data-reports/faculty-institutions/data/us-medical-school-faculty-trends-average-age
Association of American Medical Colleges. (2019). "Percentage of U.S. medical school graduates by sex, academic years 1980-1981 through 2018-2019." Association of American Medical Colleges, August 2019. https://www.aamc.org/data-reports/faculty-institutions/data/us-medical-school-faculty-trends-average-age

Appendix

Figure 1. Average Salary for Female and Male Faculties in Each Academic Rank
Figure 2. Average Salary for Female and Male Assistant Professors in Each Academic Department
Figure 3. Number of Female and Male Assistant Professors in Each Academic Department


The Growing Incoherence of Our Higher Values
Aash Mukerji

Nihilism is perhaps the most commonly misunderstood notion in Friedrich Nietzsche's writings. Not only do many wrongly believe Nietzsche to advocate for nihilistic behavior, but many also see nihilism as the loss of all value and synonymous with the belief that everything is meaningless and valueless. In reality, Nietzsche defines severe nihilism as "the conviction of the absolute untenability of existence when it comes to the highest values that are acknowledged" (1). For Nietzsche, nihilism thus does not necessarily reduce the individual to a living lump of ennui. Rather than lacking all value judgements, Nietzsche portrays nihilism as a condition characterized by the absence of justifiable higher values. This supposed depletion in justification comes from Nietzsche's infamous assertion of the death of God; Nietzsche held that modern science has made "belief in the Christian God unbelievable" (2). Nietzsche believed that without divine reasons to cherish our higher values, we would ultimately lose them entirely. Moreover, Nietzsche characterizes nihilism primarily as a cultural phenomenon—the societal loss of higher values precedes and causes the affective individual symptoms of nihilism. Nietzsche sees this cultural wave of nihilism as a looming threat; he predicts that humanity is on the brink of succumbing, of becoming nothing more than a group of "last men." Last men are characterized by the aforementioned deficiency in higher values, effectively rendering them incapable of justifying any goals that do not immediately benefit them (3). Nietzsche makes the impending nature of nihilism clear in Zarathustra, where the titular character is confronted by a chorus of individuals who actually wish to become last men (4).

Nietzsche's assertion of the imminence of nihilism was something of great interest to me, as it seems that, even in the last two hundred odd years, our higher values have not been lost entirely. Nonetheless, I was not ready to entirely discount Nietzsche's worries concerning our higher values, and this paper discusses a different manner in which our relationship with them may be deteriorating. In the wake of the death of God, what we are losing may not be our higher values themselves, but instead the unifying principles that require consistency and soundness among them. I will argue that we are progressing towards a world where our higher values are maintained but do not necessitate coherence in order to inform and justify our actions. Indeed, some incoherent higher values evidently already enjoy primacy over other kinds of values. I will attempt to demonstrate this by showing that, though contemporary society has preserved various higher values, individuals and communities frequently act in ways that conflict with those values without recognizing any logical inconsistency. This implies that what is missing from our higher values is the necessity for harmony with our actions and the other values we hold. In this paper, I will discuss some ideologies maintained today that seem to fit the characterization of higher values but conflict with our day-to-day activities and other values. I will attempt to supply some explanation for what causes this incoherence both through a Nietzschean lens and through the analysis of media culture within the framework of Jean Baudrillard. I believe both perspectives provide valuable insight into the mechanics of what is going on.
Throughout this paper, I essentially seek to prove that we have retained our higher values but are losing their coherence and structure. First and foremost, we must establish some higher values that have been preserved. In my view, the most prevalent ones seem to be the political and social ideologies we subscribe to individually and culturally. For this paper, I will primarily consider liberalism and conservatism in America as typical instances of these types of values. To distinguish higher values from other more standard values, I will make use of the criteria detailed by Katsafanas in his paper, "Fugitive Pleasure and the Meaningful Life: Nietzsche on Nihilism and Higher Values." These criteria include demandingness, tendency to generate tragic conflicts, regular induction of strong emotions, professedly great import, exclusion of other values, and propensity for creating communities (5). As far as I can tell, political ideologies seem to instantiate all of these criteria. They are certainly demanding; liberals and conservatives both generally see their chosen credo as the "correct" way to live and believe that it is immune to any sort of compromise. For either group, their ideology does not (in theory) allow them to be frivolous with their moral and political choices, and to deviate from the prescribed guidelines is often perceived as a violation of some sort of ethical code. When conflict between our political ideologies and other higher values is acknowledged, such discord is often seen as tragic. For instance, nearly everyone has heard of individuals who have experienced, or has themselves experienced, intense strife with family members due to political disagreements. Family, as a general construct, is widely treated as a higher value. Familial bonds are demanding insofar as compromising them is seen as betrayal of the highest degree, they induce powerful emotions, acting for the sake of one's family is seen as sufficient to explain most actions, family is often presented as taking priority over all other pursuits in life (to the point of being exclusionary), and family, of course, instantiates strong communities. So, when we experience conflicts between our political ideology and our family, such conflicts are nothing short of agonizing. Is it morally permissible to cut off one's family members because they are conservative or liberal or libertarian? Is the gap in ideology something so forceful that it ought to trump deep familial bonds? Such questions are not easy to answer (for most) and the dilemma one finds oneself in when faced with them is most definitely viewed as a tragic one. Even if one is not sold on the status of family as a higher value, there are numerous others that can be substituted to illustrate my point. Here I will include a brief clarification that will prove important further on in this paper: The conflict between higher values must be acknowledged. My characterization of political ideology as a higher value relies partially on the notion that if one identifies a conflict between one's political ideology and another higher value, then such conflict will be viewed as tragic. However, should one have an unrecognized logical conflict between one's ideology and another higher value for whatever reason (for instance due to growing incoherence in our higher values), then this trait does not go unfulfilled merely because such a conflict goes unnoticed. Elicitation of strong emotions when it comes to political ideology needs no lengthy justification.
One merely needs to survey the landscape of almost any social media platform to witness the masses loving, hating, condemning, and worshiping political figures. Likewise, it seems obvious that we believe political ideology is more important than the vast majority of things in our lives. At their core, ideologies such as liberalism and conservatism exist to function as banners of the things we value most in life. For the strong liberal or strong conservative, the very essence of what it means to live a good life is often synonymous with their dogma. Lastly, it is needless to say that political ideology has become exclusionary in nature and is prone to instantiate powerful communities. This phenomenon is represented best by the American political system where, by being part of one community, you are by default excluded (and often even looked down upon) by rivaling groups. As people increasingly and overwhelmingly define themselves and others based on their choices in politics, one’s ideology is commonly seen as central to one’s character. The intense political polarization that has resulted from this is testament to just how exclusionary political ideology has become and how robust the coalitions formed based on such ideology can truly be. Thus far, I have endeavored to establish political ideologies as common examples of higher values that are still held by people today. From here, I aim to show that such higher values are not being lost, but rather becoming increasingly incoherent. To this end, I would invite my reader to consider how party politics works in America. Generally speaking, the Democratic Party is meant to model liberalism and represent liberal people, and, conversely, the Republican Party is supposed to model conservatism in action. But can we confidently say that those values are the basis upon which each party unerringly acts? I have instead found their condition to be best described by the ideas of Baudrillard, whose framework I will use to illustrate what is going on. Of contemporary society, Baudrillard says: “Abstraction is no longer that of the map, the double, the mirror, or the concept. Simulation is no longer that of a territory, a referential being, or a substance. It is the generation by models of a real without origin or reality: a hyperreal. The territory no longer precedes the map, nor does it survive it. It is nevertheless the map that precedes the territory - precession of simulacra” (6). In short, rather than our values and ideals informing the models we use to structure society, the models have begun to determine our values. In this case, instead of political parties typifying our liberal or conservative ideals, it seems increasingly true that the parties are influencing and warping our values. If we accept this Baudrillardian understanding, it seems evident that what retains vital importance in modern society is not our higher values themselves but the models that now precede them. This account also explains why incoherence of higher values seems to be on the rise; we still perceive our higher values to be what drives our society forward, even though this is not the case, which causes a disconnect between the individual and the weight of their own values. In addition, because our models have started to inform our values, we are no longer able to distinguish which actions we take on behalf of a higher value and which we take on behalf of a mere imitation of one. 
Even worse, the solution is no longer as simple as critically analyzing our values in order to discriminate between which ones are legitimate and which ones are mere simulacra; our genuine higher values have started to emulate the misshapen versions of them embodied by our models. The Baudrillardian fall from grace notably has two distinct steps: First, the models of our values (in this case our political parties) seem to operate completely independently from, and often in contradiction with, our actual values. Second, in a more sinister fashion, our values themselves are altered in a manner that breeds incoherence and an inability to grasp the inconsistencies in our beliefs. This transmutation occurs, on Baudrillard’s account, through media culture (7). This fits nicely with my argument, as it seems overwhelmingly obvious that the media is now inextricably entwined with politics, meaning our political ideology is especially susceptible to modification by mass media. For those readers who are skeptical about the weight Baudrillard and I are assigning to media culture, I will justify this further on in this paper. For now, however, I will provide some evidence that the process I have just described is in fact reflective of our society. First, consider America’s involvement in the ongoing conflict between Israel and Palestine. For the fiscal year 2021, the Trump Administration sought $3.8 billion to support Israel’s military spending (8). Theoretically, this should be a big concern for Republicans. Since the reduction of taxes and minimization of governmental scope are undoubtedly two of the main goals of conservatism, and purportedly the Republican Party, it seems as though the Party ought to support decreasing tax-funded aid to Israel. And yet, studies show that the vast majority of Republicans believe that Trump has “struck the right balance” in dealing with the Israeli-Palestinian conflict (9). As we can see, even if reducing our financial assistance to Israel is in the best interests of conservatism, self-identified conservatives consistently act contrary to this because that is what their party leaders convey to them. A Republican might object to this characterization, on the grounds that America has a responsibility to intervene in areas where it has deemed human rights violations are occurring. But if that is the case, then how can the Republican simultaneously support the construction of a border wall designed to prevent persecuted Latin Americans from fleeing for their lives? (10). The significance of all this is that the status of our higher values, in relation to their models, becomes dubious. This issue seems to mimic the first step quite clearly in our Baudrillardian process. The U.S. support of Israel is but one example of this phenomenon wherein actions resulting from our models are completely separate from the actions that would normally be dictated by our higher values. Next, let’s discuss an example of the second step of the Baudrillardian framework. Firearm legislation is another controversial issue in America at the moment, with various groups holding drastically different positions on whether one has the right to bear arms. Through this issue, I hope to demonstrate the ways in which the incoherence of higher values can lead to illogical stances on both ends of the political spectrum. When it comes to gun control, conservatives are generally in favor of fewer restrictions. 
This is largely consistent with core conservative beliefs, such as minimizing government input on private lives and preserving the liberties provided by the Second Amendment. As a matter of fact, widespread gun ownership is not only morally permissible but even necessary, many conservatives say, in case the state ever decides to infringe on the rights of its citizens or coerce them without due cause. For the conservative, guns are thus a mode through which the individual can retain power over the state. So far, nothing seems wrong. But issues arise when other beliefs, supposedly in line with the same brand of conservatism, are added to the mix. While retaining this belief in the need, and indeed the moral right, to protect oneself from an unjust state as one sees fit, conservatives in America today are also associated with the position that it is unpatriotic and morally reprehensible to kneel during the anthem or otherwise protest the brutality and violence that occurs through the arm of the state, i.e., the police. It seems to me that simultaneously holding these two beliefs is something that is very difficult, if not impossible, to maintain. And yet, these are often considered standard conservative and Republican positions in our society.

Contemporary liberals do not fare much better when their higher values are analyzed in this context. For the liberal, government institutions in America have a long history of systemic racism and oppression of minorities and lower classes. Such institutions thus ought to be overhauled or rectified, the logic goes, in order to form a society that reflects more liberal values. Yet at the same time, there is a common liberal view that guns are instruments of death and should be withheld from all except government employees who require them for their job, such as police officers and military personnel. But this view seems to remove an opportunity for the individuals oppressed by the state to gain power, and instead places that power squarely in the hands of the oppressors. The liberal cannot have their cake and eat it too; to believe that the police should be defunded because of their routine violence against the people they ought to be protecting, and simultaneously believe that the state should have full authority and exclusive control over all firearms, seems problematic at best. Even if the liberal attempts to avert this problem by going even further and asserting that nobody should own a gun, then a similar problem arises. It is still the same oppressive and racist state that takes guns away from people. It is still the same state that ultimately retains the power in this scenario.

These are just some ways in which our higher values have begun to show signs of incoherence. Unlike our first step, this second step is no longer just a matter of us acting in line with our models while wrongly believing that they are reflections of what we ultimately value. If we could stop after the first step, there would still be a dim hope of redemption. If one can be shown the inconsistencies between their values and the actions of their party, it seems as though they can revise their mode of life. But the incoherence of the second step is far more deadly. Our higher values themselves are being changed; they are becoming muddled and losing intelligibility rapidly. Recommitting oneself to one’s values when faced with inconsistency is already exceptionally difficult, but refashioning one’s values when faced with complete incoherence is even more demanding.
Gun control is just one example of this, but these types of discussions all raise the same question: Are conservatives and liberals determining what their party stands for, or is it their parties that are deciding what the ideology stands for? It may be, in the words of Baudrillard, “no longer a question of imitation, nor duplication, nor even parody. It is a question of substituting the signs of the real for the real” (11).

This Baudrillardian diagnosis of society is explained by many factors, but primary among them is the rise of media culture. Overwhelmingly, the types of media we consume have come to define what it means to be social in our culture. The interactions we have with friends, family members, significant others, and strangers are all determined by the media we absorb. Take romantic encounters, for instance. The ways in which we decide how we ought to act towards our partners, what sort of romantic gestures are considered socially acceptable, and what kind of boundaries we set are all largely, if not entirely, defined by what we have seen in social media, films, television, advertisements, and so on. One need look no further, Baudrillard says, than to observe that “whoever is underexposed to the media is desocialized or virtually asocial” (12). Such a state of affairs would be fine, of course, if most forms of media faithfully represented and depicted our higher values, but the opposite seems to be the case.

Consider the social phenomenon commonly referred to as “virtue signaling” or “performative activism.” In particular, let us contemplate the cases in which one virtue signals without actually doing much to pursue that virtue. In cases like these, many remain unaware of their hypocrisy and nonetheless believe that they act virtuously when, in fact, they are merely presenting the facade of virtue. The mere posting of a black square on one’s Instagram account without any further action to support African American communities comes to mind as a relevant example of this. One might say that all individuals who engage in such signaling do so consciously—they are aware that they are “faking it” in order to achieve popularity, acceptance into a social group, or something of this nature. But this seems like an overly pessimistic claim, and I would characterize the phenomenology of such individuals differently. I would argue that most people who act in these ways genuinely believe that they are pursuing their ideal of a virtuous life. They do not recognize that their virtues (which are closely related to, if not synonymous with, their higher values) are not informing their actions. Rather, they are acting according to the media-warped model of what it means to instantiate that virtue.

To make this more concrete, take the notion of equality to be a higher value or virtue that one strives towards. If equality were truly what was informing the behavior of the virtue signaling person, then such a person would seemingly recognize that their actions are not satisfying that higher value. Thus, it seems much more likely that what the virtue signaler is motivated by is not the pure higher value of equality, but rather an incoherent version of it altered by media culture. The people who post black squares on their Instagram and then go about their daily life feeling excellent about their stand against police brutality and institutionalized racism certainly feel as if they have higher values (e.g., equality, liberalism), but such values have been rendered incoherent.
The proof of this incoherence is, of course, that their higher values are (even partially) satisfied by trivial actions that provide no substantive change in one’s way of life or the world. In addition, as referenced earlier, such people do not experience the tragic feelings that are meant to accompany conflict between higher values because they do not recognize that such conflict exists in the first place. We can perhaps judge from the outside that there seems to be an objective disconnect between these people’s purported higher values and their actions, but the growing incoherence of their values prevents the perpetrators themselves from coming to the same conclusion.

At this juncture, one might object that if our higher values have become so vacuous that they can be fulfilled by such superficial action, then it is likely that they are not higher values at all anymore. In this regard, it would seem that we have ultimately returned to Nietzsche’s hypothesis and lost our higher values entirely. To this I would reply that these incoherent higher values may very well be vacuous, but they retain their status, nonetheless. Political ideology, for instance, still instantiates all the higher value criteria, as I have discussed. What this shows is that our immediate societal condition is distinct from that of the last man. We still have the capacity to cherish things in all the right ways and set goals for ourselves beyond immediate gratification; it is just that the things we cherish and the goals we set may be severely distorted.

Now that we have touched on how our current situation is different from the last man, a question naturally arises: What would Nietzsche say about the state of our political ideology? When it comes to politics, Nietzsche is remarkably silent. Try as one might, it is rare to find Nietzsche discussing political ideology at great length. Though he is often found criticizing democracy as a “conspiracy of the whole herd against [its] shepherd,” this does not amount to much in the form of a distinct political structure (13). We also get some cryptic allusions to the merits of a natural order-based caste system in The Antichrist, but this discussion seems less about the desirability of the castes and more about how even this sort of ideology is preferable to the life-negating belief system that characterizes Christianity (14). Rather than focusing on political structures in service of the many, it seems like Nietzsche was more interested in particular individuals who could serve as paragons of vice or virtue. Napoleon is an oft-cited example of someone Nietzsche admired very much, going so far as to name him one of the “profound and largeminded men of [the] century” (15). But Nietzsche’s view of Napoleon as a higher man could justify an entire paper by itself, and such analysis is not particularly relevant to our discussion of higher values.

Rather than present Nietzsche’s (scarce) ideas on political ideology, it seems more fruitful to examine how concepts like the death of God might have led to our current predicament. Nietzsche, of course, saw the loss of higher values as a direct result of the absence of a divine entity able to furnish our choices and goals with meaning. But what may be more potent than the loss of objective meaning is the loss of the structure installed by that divinity. Organized religion generally aims to provide a clear system concerning the fulfillment of our higher values.
It lays out, for instance, what constitutes a sin, how to worship properly, and so on. Ergo, religion serves to enforce uniformity between our actions and our values. It is fundamentally designed not to allow individuals to both violate their own higher values and escape with a morally sound conscience. But without religion, this necessity for consistency between our higher values and our behavior is damaged. There is no eternal damnation, no divine punishment, no karmic justice to compel us to maintain such cohesion, and so we lose it.

What all of these topics have in common, from virtue signaling to contradictory stances on gun control, is the apparent inconsistency in the higher values that allow such behaviors to take place. Ultimately, I would suggest that the path the individual in contemporary American society has taken is distinct from the condition of the last man that Nietzsche fears. Higher values persist, as I have endeavored to show, but the logical consistency required to fulfill them properly seems to be rapidly deteriorating. Nevertheless, if one wishes to remain compatible with Nietzsche’s theory, then we could perhaps frame the current state of society as merely one of the final steps on our inevitable trajectory to becoming last men. After all, it does seem plausible that the degeneration of the coherence of our highest values will eventually lead to the loss of them entirely. Such an understanding does raise some concerns, however, as discarding our higher values (“believing in their untenability,” as Nietzsche puts it) seems to require one to be aware of their unintelligibility. Insofar as we have reached a state where our higher values no longer even need to be consistent in order for us to act on them, one can wonder whether we will ever collectively reach a position where we realize that the inconsistencies in our higher values run so deep that they must be abandoned altogether. Still, there is certainly a Nietzschean argument to be made that there must be a limit to how incoherent our higher values can get before we rid ourselves of them in disgust.

As it stands now, even though our higher values have accumulated significant incoherence, they can still be said to represent us faithfully for the most part. For example, though issues like gun control demonstrate some glaring problems that require rectifying, I would argue American liberals and conservatives still tend to largely act on the values at the core of their ideology. Single-payer healthcare is a good instance of this: Liberals largely support it on the basis of their beliefs about equality and human rights, while conservatives largely do not because it would result in less individual freedom and higher taxes (16). Thus, though I have spent the majority of this paper painting quite a dismal picture, our higher values do not seem close to collapsing entirely. We may not have to resign ourselves to the fate of the fabled boiling frog, whereby the coherence of our higher values continually gets worse without us noticing. One could certainly make a compelling case that when the incoherence gets to a stage where it overwhelms the proper functioning of our higher values, we will desert them. At such a juncture, it appears we would have no choice but to become last men. At any rate, if one is really committed to the Nietzschean hypothesis, one could call the state we are currently inhabiting that of the “penultimate man” (or, better yet, the “penultimate person,” for the sake of inclusivity and alliteration).
Endnotes

1 Friedrich Nietzsche, et al., Writings from the Late Notebooks, (Cambridge University Press, 2016), 205.
2 Friedrich Nietzsche and Walter Kaufmann, The Gay Science: With a Prelude in Rhymes and an Appendix of Songs: Translated, with Commentary by Walter Kaufmann, (Random, 1974), 343.
3 Friedrich Nietzsche, et al., Nietzsche: Thus Spoke Zarathustra, (Cambridge University Press, 2006), 129.
4 Ibid, 130.
5 Paul Katsafanas, “Fugitive Pleasure and the Meaningful Life: Nietzsche on Nihilism and Higher Values,” Journal of the American Philosophical Association, (Cambridge University Press, 2015), 9-11.
6 Jean Baudrillard and Sheila Faria Glaser, Simulacra and Simulation, (University of Michigan Press, 2019), 2.
7 Glenn Yeffeth, Taking the Red Pill: Science, Philosophy, and Religion in the Matrix, (Benbella Books, 2003), 74.
8 U.S. Foreign Aid to Israel, Congressional Research Service, 2020, fas.org/sgp/crs/mideast/RL33222.pdf.
9 “U.S. Public Has Favorable View of Israel's People, but Is Less Positive Toward Its Government,” Pew Research Center - U.S. Politics & Policy, 2020, www.pewresearch.org/politics/2019/04/24/u-s-public-has-favorable-view-of-israels-people-but-is-less-positive-toward-its-government/.
10 Suzanne Gamboa, et al., “Why Are so Many Migrants Crossing the U.S. Border? It Often Starts with an Escape from Violence in Central America,” NBCNews.com, 2018, www.nbcnews.com/storyline/immigration-border-crisis/central-america-s-violence-turmoil-keeps-driving-families-u-s-n884956.
11 Baudrillard and Glaser, Simulacra and Simulation, 2.
12 Ibid, 55.
13 Friedrich Nietzsche, The Antichrist, (Auckland, NZ: Floating Press, 2010), 67.
14 Ibid, 57.
15 Friedrich Nietzsche, Beyond Good and Evil, (New York, NY: Dover Publications, 1998), 256.
16 Bradley Jones, “Increasing Share of Americans Favor a Single Government Program to Provide Health Care Coverage,” Pew Research Center, 2020, www.pewresearch.org/fact-tank/2020/09/29/increasing-share-of-americans-favor-a-single-government-program-to-provide-health-care-coverage/.

Bibliography

Baudrillard, Jean, and Sheila Faria Glaser. Simulacra and Simulation. University of Michigan Press, 2019.
Gamboa, Suzanne, et al. “Why Are so Many Migrants Crossing the U.S. Border? It Often Starts with an Escape from Violence in Central America.” NBCNews.com, NBCUniversal News Group, 22 Oct. 2018, www.nbcnews.com/storyline/immigration-border-crisis/central-america-s-violence-turmoil-keeps-driving-families-u-s-n884956.
Jones, Bradley. “Increasing Share of Americans Favor a Single Government Program to Provide Health Care Coverage.” Pew Research Center, Pew Research Center, 30 Sept. 2020, www.pewresearch.org/fact-tank/2020/09/29/increasing-share-of-americans-favor-a-single-government-program-to-provide-health-care-coverage/.
Katsafanas, Paul. “Fugitive Pleasure and the Meaningful Life: Nietzsche on Nihilism and Higher Values.” Journal of the American Philosophical Association, Cambridge University Press, 22 Sept. 2015, www.cambridge.org/core/journals/journal-of-the-american-philosophical-association/article/abs/fugitive-pleasure-and-the-meaningful-life-nietzsche-on-nihilism-and-higher-values/449B756CD8E5DC8139A701AC195F33F8.
“Most Border Wall Opponents, Supporters Say Shutdown Concessions Are Unacceptable.” Pew Research Center - U.S. Politics & Policy, Pew Research Center, 21 Aug. 2020, www.pewresearch.org/politics/2019/01/16/most-border-wall-opponents-supporters-say-shutdown-concessions-are-unacceptable/.
Nietzsche, Friedrich. Beyond Good and Evil. Dover Thrift Editions. New York, NY: Dover Publications, 1998.
Nietzsche, Friedrich, and Walter Kaufmann. The Gay Science: With a Prelude in Rhymes and an Appendix of Songs: Translated, with Commentary by Walter Kaufmann. Random, 1974.
Nietzsche, Friedrich, et al. Nietzsche: Thus Spoke Zarathustra. Cambridge University Press, 2006.
Nietzsche, Friedrich, et al. Writings from the Late Notebooks. Cambridge University Press, 2016.
Nietzsche, Friedrich Wilhelm. The Antichrist. Auckland, NZ: Floating Press, 2010.
U.S. Foreign Aid to Israel. Congressional Research Service, 16 Nov. 2020, fas.org/sgp/crs/mideast/RL33222.pdf.
“U.S. Public Has Favorable View of Israel's People, but Is Less Positive Toward Its Government.” Pew Research Center - U.S. Politics & Policy, Pew Research Center, 30 May 2020, www.pewresearch.org/politics/2019/04/24/u-s-public-has-favorable-view-of-israels-people-but-is-less-positive-toward-its-government/.
Yeffeth, Glenn. Taking the Red Pill: Science, Philosophy, and Religion in the Matrix. Benbella Books, 2003.
