
Can You Rationally Disagree with a Prediction Market?

Nick Whitaker
Brown University

April 2021


This paper brings together two literatures: that of the efficient market hypothesis in economics and the relatively recent literature on disagreement in epistemology. In economics, there has been substantial discussion of how markets aggregate knowledge through prices. In the philosophy of disagreement literature, there is significant agreement that you should defer to those with more knowledge and better judgment than you. I argue that, given these two conclusions, we are epistemically bound to defer to prediction markets in most situations, though I discuss possible exceptions.

On the website PredictIt, one can find betting markets for a number of future political events like “Who will be the 2020 Democratic nominee?”, “Will Maduro be in office at the end of 2019?”, and “Will Trump be impeached in his first term?” Shares can be bought and sold at floating prices between $.01 and $.99, and pay out a dollar if the bet wins. Share prices can thus be converted into probabilities, i.e., a $0.25 share represents a probability of .25. I pose the question of how a rational person must change their credence when they encounter a prediction market like those on PredictIt. I will argue that under almost all normal circumstances, it is not rational to disagree with a well-functioning market. In fact, one should adopt the credence suggested by the prediction market. As I argue, a well-functioning prediction market is, on a given issue, almost always one’s epistemic superior with regard to that issue because it tends to incorporate all available information and to aggregate that information effectively in its suggested credence. Given that you are likely neither incorporating all available information nor aggregating that information effectively into your credence, you are in a worse epistemic position about a given issue than its prediction market is. Thus, upon encountering a prediction market, you are rationally bound to adopt the credence suggested by it.
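The conversion from share price to credence, and the expected value of a share for a bettor who disagrees with the market, can be sketched in a few lines (a minimal illustration; the function names and figures are my own, not PredictIt’s):

```python
# A PredictIt-style binary contract pays $1 if the event occurs, $0 otherwise.
# Its price can therefore be read directly as an implied probability.

def implied_probability(price: float) -> float:
    """Convert a share price in dollars to the market's implied credence."""
    assert 0.01 <= price <= 0.99, "PredictIt prices float between $0.01 and $0.99"
    return price

def expected_value(price: float, credence: float) -> float:
    """Expected profit per share for a bettor holding the given credence."""
    return credence * 1.00 - price  # $1 payout if the event occurs, minus cost

print(implied_probability(0.25))             # 0.25
print(round(expected_value(0.25, 0.40), 2))  # 0.15: positive EV if you think P = .40
```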

I see this conclusion as emerging naturally from the efficient market hypothesis (EMH) literature in financial economics and the burgeoning disagreement literature in epistemology, but I think their combined implications have been underappreciated. For example, together they imply that things which normally guide our credence on a given issue – like pundits’ predictions or our interpretations of poll data – ought to hold little influence, if any, over our credence relative to the credence suggested by a prediction market. In many cases, the suggested credence of a prediction market will be surprising. Often, one is inclined to reject it. After looking at enough prediction markets, one will almost certainly find a suggested credence that he or she is convinced is wildly wrong. Nevertheless, I will argue that we should adopt the credence implied by markets, despite how reluctant we may feel. Finally, as I will argue, if one refuses to adopt the credence implied by a betting market and is not averse to betting, the most rational course of action would be to bet on that market.

The Market as an Epistemic Tool: A Short History
Markets are efficient. This is suggested by basic economic theory: given competitive markets and free entry, if people knew the price of an asset would rise or fall, they would buy or sell the asset until its market price reflected that information. This obviously cannot be exactly the case in reality; no markets are perfectly competitive, and entry is never completely free. Yet just how efficient markets are has surprised many, as established by over half a century of economic theory and empirical evidence. There are important outstanding questions about how strong a claim “efficiency” implies, and whether there are times when markets are not efficient, referred to as market anomalies. But let us begin by reviewing the history of the theory.

The efficient market hypothesis was popularized by University of Chicago economist Eugene Fama. The idea began as an outgrowth of his dissertation, published in The Journal of Business in 1965 as “The Behavior of Stock-Market Prices.” The paper argues that stock prices are essentially a “random walk” in that the “patterns” perceived in the past performance of a stock in no way indicate the stock’s future performance. For asset price movement to be a random walk, future prices need to be independent of past prices. As Fama writes:

In statistical terms independence means that the probability distribution for the price change during time period t is independent of the sequence of price changes during previous time periods. That is, knowledge of the sequence of price changes leading up to time period t is of no help in assessing the probability distribution for the price change during time period t. (35)
Given that future prices are independent from past ones, no amount of study of the past performance of a stock will allow one to predict future performance. To the extent that the future is known, we should expect that information to be already incorporated into the asset price.

Thus, the question becomes: Why are future prices independent of past prices? It is this question that led Fama to his work on EMH. Roughly speaking, if markets are efficient, only what is truly unknown is not incorporated into an asset’s price, and thus future movements in that price are random. As investors with information about the future buy and sell a given asset, the price of that asset would come to reflect all available information about the future.
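The independence claim can be illustrated with a small simulation (a sketch only; the distribution and sample size are arbitrary): if price changes are independent draws, the lag-one autocorrelation of those changes should sit near zero, so past changes offer no purchase on the next one.

```python
# Under the random-walk view, price changes are independent draws, so the
# correlation between consecutive changes should be near zero.

import random

random.seed(0)
changes = [random.gauss(0, 1) for _ in range(100_000)]

# Sample autocorrelation at lag 1: cov(x_t, x_{t+1}) / var(x)
mean = sum(changes) / len(changes)
var = sum((x - mean) ** 2 for x in changes) / len(changes)
cov = sum((changes[i] - mean) * (changes[i + 1] - mean)
          for i in range(len(changes) - 1)) / (len(changes) - 1)

print(round(cov / var, 3))  # close to 0: past changes don't predict the next one
```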

In 1970, Fama published his most cited paper, “Efficient Capital Markets: A Review of Theory and Empirical Work” in The Journal of Finance. In the paper, Fama clearly defined the terms on which a market can be considered efficient: “A market in which prices always ‘fully reflect’ available information is called ‘efficient.’” As this cannot be exactly true, Fama proposes it rather as a null hypothesis in the paper. To test efficiency more precisely, Fama delineates three versions of the hypothesis: The weak form, the semi-strong form, and the strong form. The weak form states that markets are efficient if future performance cannot be predicted from past prices. The semi-strong form states that markets are efficient if future performance cannot be predicted based on any publicly available information. The strong form states that the future performance cannot be predicted even with insider information.

As the title of the paper suggests, Fama discusses the theoretical and empirical foundations of the hypothesis. The theoretical work is interesting, but perhaps not worth exploring here. Essentially, different mathematical models are able to test different versions (weak, semi-strong, strong) of the hypothesis. As Fama explains, early studies mostly focused on the weak version, essentially testing for the independence of new prices from historic prices. As the weak version gradually became established, research turned to the semi-strong version, which can be thought of as the question of how quickly newly public information is incorporated into market prices. Fama finds that tests of the semi-strong EMH lend “considerable support” to the hypothesis (408). The strong hypothesis had not been extensively studied at the time, but limited evidence did show that insiders were able to generate supernormal returns, and thus asset markets are likely not strong-form efficient. As Fama concludes, “For the purposes of most investors the efficient markets model seems a good first (and second) approximation to reality” (416).

Since its publication, Fama’s account of EMH has faced decades of empirical scrutiny and has become a major topic of debate in finance. This is especially true as EMH challenges the value proposition of active money managers, who charge their clients fees to supposedly “beat the market.” According to EMH, their ability to do this is essentially as good as anyone else’s. Thus, many have sought to demonstrate market inefficiencies of which savvy investors could take advantage. There is a certain irony to this process, as whenever an inefficiency becomes known it becomes accounted for in the market prices and thus the prices cease to be inefficient. Conversely, in the event that everyone believed markets were efficient and invested passively, markets would become inefficient.

A major review of the EMH literature was conducted in 2003 by Burton G. Malkiel. As Malkiel writes, if market anomalies abounded, we would expect actively managed funds to capitalize on them and consistently make supernormal returns. Thus, we can test market efficiency by looking at whether actively managed funds have been able to outperform diversified index funds. When the fees charged by actively managed funds are considered, investment firms have actually underperformed relative to the market (Malkiel 2003). This suggests that even to the extent markets are inefficient, we are bad at consistently identifying those inefficiencies.
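Malkiel’s point about fees can be put in miniature (all return and fee figures here are hypothetical): an active fund that merely matches the market gross of fees underperforms a cheap index fund net of fees.

```python
# Even a manager who matches the market gross of fees underperforms it net.
gross_market_return = 0.08   # hypothetical annual market return
index_fund_fee      = 0.001  # 10 basis points, hypothetical
active_fund_fee     = 0.01   # 100 basis points, hypothetical

net_index  = gross_market_return - index_fund_fee
net_active = gross_market_return - active_fund_fee  # matching the market gross

print(net_index > net_active)  # True: the active fund must beat the market
                               # by its fee gap just to break even
```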
Beyond Finance: Prediction Markets

The key insight of EMH is an epistemic one: Markets tend to reflect all publicly available information. Though the theory has traditionally been framed as a theory of asset pricing, research quickly turned to what information about the future could be gleaned from asset prices. In 1975, Fama published a paper, “Short-Term Interest Rates as Predictors of Inflation,” to do just that.
Since then, the idea has been taken further: assets have been structured for the explicit purpose of efficiently aggregating information about the likelihood of future events. Economist Robin Hanson has been a major proponent of this approach, proposing that we develop “prediction markets,” sometimes referred to as “idea futures” or “information markets.” Typically, a prediction market is structured as a binary option that pays out either $0 or $1. Thus, the price of a bet can be converted into a credence. If the prediction market quickly incorporates all publicly available information, as EMH would suggest, then we can expect the credences suggested by the prediction market to be maximally informed.
Relative Epistemic Positioning

Given the epistemic power of prediction markets, we may ask how we should respond when we disagree with the credence suggested by the prediction market. Perhaps the initial question in any disagreement scenario should be: “Are you in a better position to judge B than your interlocutor is?” This question can be answered by considering a number of factors. Bryan Frances, in his book Disagreement, enumerates a number of considerations for epistemic positioning. Some of his main criteria are:
• Cognitive ability had while answering the question
• Evidence brought to bear in answering the question
• Relevant background knowledge
• Relevant biases
After these factors, we can follow the disagreement literature in categorizing our interlocutor as being an epistemic peer, superior, or inferior on the given issue B.

With this in mind, we can ask a central question: Given the efficiency of prediction markets, what is their epistemic positioning relative to us? If prediction markets are approximately semi-strong efficient, then, by definition, they are incorporating all public information into their implied credence. In most circumstances, one’s credence is formed with less information than all publicly available information. Depending on the topic, the difference in information between you and a given prediction market may be small or vast, but regardless, the prediction market is likely your epistemic superior in this regard.
Additionally, a prediction market almost certainly has better judgment than you do. We can think of good judgment as aggregating information in an accurate way. Implicit in any bet is the bettor’s weighting of his or her evidence. Just as a bettor would have an opportunity to profit on unique evidence they possessed, so too could they profit from weighting the evidence more accurately than others. For example, we can imagine a prediction market on a town’s mayoral race. Let us assume that all of the relevant information about the race consisted of three polls, and everyone betting in the market was aware of this information. Though there is no informational disparity between the bettors, a bettor could gain an edge by having the best sense of which of the polls were more accurate. By making these bets, he would push the betting market towards aggregating the poll results in the most accurate way. A bettor would be incentivized to do this, as they could profit off of a correct opinion until it was fully represented in the market price.

Indeed, we could think of good judgment (or accurately aggregating information) as another type of information: higher-order information. So, we should expect a prediction market not only to be incorporating more information, but also to be incorporating more higher-order information, leading it to aggregate that information more accurately. Thus, we can classify prediction markets as epistemic superiors on their relevant propositions under normal circumstances.
Disagreeing with Epistemic Superiors

In his paper “Reflection and Disagreement,” Adam Elga discusses how we should be guided by our epistemic superiors. He gives the example of the weather person, to whom we defer completely. If the weather person says there is a 60% chance of rain today, my credence that there will be rain today becomes 60%. Elga calls this treating the weather person as an “expert.” When someone is an expert with regard to weather, “Conditional on her having probability x in any weather-proposition, my probability in that proposition is also x” (2). As Elga writes, this means deferring to the expert on two accounts: Information and judgement. By deferring to the weather person with respect to information, we admit that she has more information (regarding weather) than we have. By deferring to the weather person with respect to judgement, we admit that she has a better manner of forming opinions (regarding weather) than us.

Presumably, some people or things should be treated as experts and some people or things should not be. If a person or thing does deserve to be treated as an expert on a given domain, we should defer to their credence. If a person or thing does not deserve expert treatment, perhaps they or it should play some other role in our credence formulation. As Elga points out, there are two obvious ways the forecaster could cease to be an expert, either by failing to have more information or in failing to have better judgement. If I knew that the weather person’s radar was broken, and thus her information was corrupted, she would cease to be an expert. Similarly, if she were very drunk such that her weather judgment was inhibited, she would also cease to be an expert.

Given the previous discussion of EMH and prediction markets, I will assume that they are, in general, incorporating more information and better judgment into their suggested credence than a given individual is. Thus, a prediction market is an expert with regard to its topic, and one should normally defer to it. However, just as there are situations in which a weather person fails to be an expert with regard to weather, we might expect that there are situations in which prediction markets cease to be experts. Let’s discuss a few potential situations.

Information Errors
For any market to function, it must be sufficiently thick, as opposed to being a thin market, one with few buyers and sellers. If I set up a prediction market on a subject and only allow three of my friends to bet on it, the market would have only as much information as the three of my friends have, and thus could not be considered an expert. For my purposes, I will limit the following discussion to thick, functioning prediction markets, though thin markets would constitute exceptions to my arguments here.

One initial information error is that you might be privy to important insider information that has not been incorporated into prediction market prices. If that were the case, the prediction market would cease being an expert to you on account of it lacking knowledge you have. However, whether there is insider information that has not been incorporated into the price is a question of whether prediction markets are efficient in the semi-strong sense or the strong sense. As discussed, stock markets have not been found to be efficient in the strong sense. However, in the stock market, buying and selling stocks based on insider information is illegal under insider trading laws. In many prediction markets, which are not regulated by the US Securities and Exchange Commission, this is not the case. So, one might expect more insider information to be incorporated. Indeed, there is a substantial question of whether the insider information you possess is not already part of the price. If it is, your disagreement with the suggested credence of the prediction market may not be justified.

Let us assume, however, that your “insider information” is actually something only you possess. How, then, should the prediction market be treated? In this case, Elga argues that we can treat the prediction market as a “guru.” In guru cases, rather than accepting a credence unconditionally as we do with an expert, we accept the credence conditionally. We can formalize this following Elga. Let H represent a given proposition and P’ represent the prediction market’s probability function.

In the expert prediction market case:
P(H | Prediction Market has P’) = P’(H)

In the guru prediction market case where “X” is your insider information:
P(H | Prediction Market has P’) = P’(H | X)

Thus, Elga advises we conditionalize the guru’s probability on our unique information. So, if we truly have unique information, we can conditionalize the prediction market’s implied credence on it to form our optimal credence.
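Elga’s guru rule can be given a worked numerical form (all figures are hypothetical; the market’s credence and your likelihoods for X would come from the situation at hand):

```python
# Elga's guru rule in miniature. All numbers are hypothetical: the market's
# credence P'(H) and your estimates of how likely your private evidence X
# would be under H and under not-H.

def guru_update(p_h: float, p_x_given_h: float, p_x_given_not_h: float) -> float:
    """Bayes' rule: conditionalize the market's credence P'(H) on insider info X."""
    numerator = p_x_given_h * p_h
    denominator = numerator + p_x_given_not_h * (1 - p_h)
    return numerator / denominator

# Market implies P'(H) = .60; you judge X twice as likely if H is true.
print(round(guru_update(0.60, 0.90, 0.45), 3))  # 0.75
```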

How exactly this conditionalization should work could be simple or complex. If you were the doctor of a presidential candidate and, after an appointment with the candidate, you came to the conclusion that the candidate had a terminal illness and would die before the election, P’(H | X) would be near zero, as we would obviously expect the prediction market to suggest something similar if the information were known.
However, suppose you were a friend of a presidential candidate, and the candidate told you: “I have just decided that tomorrow I will be announcing new policy X. I have not told anyone besides you.” How could you conditionalize a prediction market’s suggested credence on this information? It is hard to say. Perhaps you could see whether any other candidate had announced a similar policy, or look at how the policy was polling, to try to get a sense of whether your candidate friend is more or less likely to be elected after he or she announces the policy. Yet this will always require some degree of guesswork and judgement. At the same time, this problem is common to conditionalizing on other types of evidence. Suppose your P(rain tomorrow) = .5, and your friend asked, “What would your credence be if I told you my dad guaranteed it would rain tomorrow?” You might have some sense of this depending on what you knew about your friend’s father, but some degree of fuzziness here seems inevitable.

Another possibility is that the market might have anomalies or biases which would allow for rational disagreement with its suggested credence. The most commonly discussed bias in prediction markets is the “favorite-longshot bias.” The bias is an empirical phenomenon: bettors have been known to overvalue “long-shot” bets relative to favored ones. For example, the 1/50 horse at the horse race might actually perform closer to 1/100. Both rational-expectations and behavioral explanations have been proposed for this phenomenon, as it violates EMH. The nature of the explanations themselves is not relevant to the current discussion, but the existence of the favorite-longshot bias does suggest that if a prediction market is suspected of manifesting the bias, one should treat the market as a guru and conditionalize the market’s implied credence on the bias:
P(Long-Shot | Prediction Market has P’) = P’(Long-Shot | Long-Shot Bias)
We may not be able to conditionalize perfectly, but we could look to the typical effect size of long-shot bias in similar prediction markets, and try to work towards the conditional probability from there. Thus, the bias can at least be mitigated.
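One crude way to sketch such a correction (the shrinkage factor here is hypothetical; in practice it would be estimated from the typical effect size in comparable markets) is to deflate the long shot’s implied odds:

```python
# A crude sketch of treating the market as a guru about a long shot. The
# bias adjustment is a hypothetical shrinkage factor applied in odds space.

def debias_longshot(market_prob: float, shrinkage: float = 0.5) -> float:
    """Shrink a long shot's implied probability toward zero (in odds space)."""
    odds = market_prob / (1 - market_prob)
    adjusted = odds * shrinkage          # long shots trade above fair odds
    return adjusted / (1 + adjusted)

# A "1/50" long shot priced at about .02 may behave more like .01:
print(round(debias_longshot(0.02, 0.5), 3))  # ≈ 0.01
```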

A final information error might be that a prediction market is being fed bad information by a manipulating bettor who, knowing that people are using the prediction market to inform their views about the future, seeks to manipulate it. Hanson discusses this potential problem in a paper with Ryan Oprea, “A Manipulator Can Aid Prediction Market Accuracy.” As the title implies, Hanson comes to the surprising conclusion that manipulators not only do not impede the functioning of a prediction market, they make it more accurate. As Hanson suggests, we can think of a potential manipulator as adding “noise” into the market. Because of this noise, the expected return on accurate information increases, thereby attracting more investors. With more investors, more information comes into the market, making it more accurate. However, if you did suspect that other investors were not capitalizing on the manipulator and correcting the market, you could conditionalize the market’s suggested credence on the manipulation, moving toward what the credence would look like without a manipulator.

Though there are potential information related risks, they seem to be sufficiently uncommon that one should not expect them in normal circumstances. Insider information is possible, though may not actually be truly non-public. Prediction market biases may exist, but can be accounted for. Market manipulation seems counterproductive. Information related errors should be looked for, but they are not able to diminish the established informative power of prediction markets.

Judgment Errors
The second type of errors, judgment errors, relate to the ways in which information is aggregated. Markets are one way of aggregating information, but there are others, like deliberation. What if you came to your credence as part of a deliberating group, incorporating the knowledge of many into your credence? Let us even assume, for the sake of argument, that your deliberating group had access to all the same information as all of the bettors in a given prediction market. If your credence differs from the credence suggested by the betting market, would it be rational to conciliate? Cass Sunstein takes on this issue in a 2006 paper, “Deliberating Groups Versus Prediction Markets.” Sunstein points out that this is a particularly important case, as many of our decisions and credences come about through deliberation with others.

Why should we expect this deliberative process to be desirable, especially relative to prediction markets? Indeed, as Sunstein argues, we should not expect deliberation to work better. We should actually expect it to be a less efficient way to aggregate information. Sunstein focuses primarily on two reasons: group members failing to disclose what they know out of deference to the public information announced by others, and social pressures leading members not to dissent from the group. As Sunstein writes, “Groups often amplify rather than correct individual errors; emphasize shared information at the expense of unshared information; fall victim to cascade effects; and tend to end up in more extreme positions in line with the predeliberation tendencies of their members” (192-3). On the other hand, prediction markets provide potential financial reward for individually held information and contrarian opinions, succeeding exactly where deliberation fails. Indeed, the profit motive makes uncommon knowledge especially profitable, whereas social, deliberative situations make group-approved information most valuable. Thus, deliberation is, on average, a worse method of aggregation. One still ought to defer to the judgment of the prediction market.

What if you created another information-aggregating mechanism to inform your credence on a given issue, one that you think may outperform a prediction market? There have been two notable recent attempts at this: Nate Silver’s election forecasting and Philip Tetlock’s Good Judgement Project.

In 2008, Nate Silver rose to prominence by using Bayesian statistical techniques to aggregate poll results, leading to highly accurate electoral predictions. Silver’s work provides a case study in whether advanced statistical techniques can aggregate information more effectively than a prediction market. In 2009, economist David Rothschild tested Silver’s predictions against those suggested by a leading prediction market at the time, Intrade. He concludes, “I demonstrate that early in the cycle and in not-certain races debiased prediction market-based forecasts provide more accurate probabilities of victory and more information than debiased poll-based forecasts” (895). Rothschild’s technique is interesting. When he debiases Silver’s results, they become more accurate than the raw prediction market results. But when he accounts for the favorite-longshot bias discussed earlier, the debiased prediction market becomes most accurate. To put this into Elga’s framework, the prediction market makes for a better guru than Silver.

Silver addresses the study directly in his book, The Signal and the Noise. He takes some issue with Rothschild’s methodology: that Rothschild debiases the prediction market results, and, more importantly, that the prediction markets move in response to Silver’s own poll aggregation. Nevertheless, as Silver writes, “Over the long run, however, the aggregate forecast has often beaten even the very best individual forecast.” Silver is skeptical of the current state of prediction markets, thinking that there is not yet enough competition and the markets are still relatively thin, but he is open to their potential superiority. Thus, Silver echoes the earlier caveat: the current set of prediction markets, given the legal restrictions on them, may suffer from the market thinness discussed above.

Philip Tetlock’s work on forecasting has also become an interesting potential challenge to prediction markets. In Tetlock’s Good Judgement Project, he sought out people who were outstanding at predicting the future over a number of years. He called this group “superforecasters.” In his book on the subject, Superforecasting, Tetlock describes testing teams of superforecasters against prediction markets. His results: “Teams of ordinary forecasters beat the wisdom of the crowd by about 10%. Prediction markets beat ordinary teams by about 20%. And superteams beat prediction markets by 15% to 30%” (207). The result is fairly surprising, given the power of prediction markets. But it is perhaps not entirely fair. As Tetlock admits, “I can already hear the protests from my colleagues in finance that the only reason the superteams beat the prediction markets was that our markets lacked liquidity: real money wasn’t at stake and we didn’t have a critical mass of traders. They may be right” (207). Interestingly, the argument is very similar to Silver’s: the relatively small scale of current prediction markets suggests they are not operating as well as they could be. So, if you are a superforecaster working with a team of other superforecasters, perhaps your group’s combined judgement is sufficiently better than current prediction markets that you need not defer to them. But this may cease to be true if better prediction markets were developed.
So even in the narrow cases of Silver and Tetlock, the practitioners themselves are skeptical of their own ability to beat more robust prediction markets. And this makes sense: as soon as a strategy develops an edge on a prediction market, its practitioners are incentivized to bet on that edge until their information is fully incorporated into the betting market’s price. In this way, a betting market can, and indeed is encouraged to, subsume all other aggregation mechanisms until it obtains maximal accuracy.
Staying Steadfast Against Your Superiors

Let us assume you do not adopt the credence suggested by a prediction market because you wish to remain steadfast and think the suggested credence is incorrect. Bryan Frances discusses a similar epistemic situation in his piece “Philosophical Renegades,” where an amateur astronomer retains her belief that Jupiter has fewer than 10 moons even after the vast majority of professional astronomers have come to believe the planet has over 200 moons. The astronomer has no concrete reason to reject the opinions of the expert astronomical community, but perhaps would say she expects that the others are making a mistake. This is not unlike the situation one is in when disagreeing with a prediction market, as the prediction market is likely aggregating all available evidence in an effective manner. To some extent, the rationality of retaining one’s credence in the face of this disagreement depends upon how much one knows about prediction markets. If one were unaware of their epistemic virtues, the disagreement may be justified. But if one understood prediction markets, the disagreement may be blatantly irrational.

There is another interesting dimension to disagreeing with prediction markets, whether because you remained steadfast or because you have conditionalized on the suggested credence of the prediction market. In either of these situations, you should see yourself as having an opportunity to arbitrage. Let us assume, for example, that a prediction market suggests that the chance of Donald Trump being reelected is .42, as PredictIt suggests at the time of writing. Let us also assume that your credence in Donald Trump being reelected is .1. By your lights, a “No” share priced at $0.58 pays out $1 with probability .9, for an expected profit of $0.32 per share. Indeed, if you really believe your credence of .1, betting against the market should be seen as a profitable strategy over the long run. If you do not have an aversion to betting, then you should bet. Even if you treat the prediction market as a guru and conditionalize its suggested probability, rationality would still suggest you bet against the market wherever your resulting credence diverges from the price, as the bet would have positive expected value.
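The arbitrage arithmetic can be made explicit (using the figures from the Trump example; the function name is my own):

```python
# The arbitrage argument in numbers: the market prices "Trump reelected" at
# $0.42, so a "No" share costs $0.58 and pays $1 if he loses. With your
# credence of .10 that he wins, the bet has a large positive expected value.

def ev_of_no_share(yes_price: float, your_p_yes: float) -> float:
    """Expected profit per 'No' share: payout probability minus cost."""
    no_price = 1.0 - yes_price
    return (1.0 - your_p_yes) * 1.0 - no_price

print(round(ev_of_no_share(0.42, 0.10), 2))  # 0.32 expected profit per share
```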

The disagreement literature discusses the different ways in which we should engage with our epistemic peers, inferiors, and superiors. As I have shown, we should look to prediction markets as our epistemic superiors, and as experts or gurus in Elga’s sense.

I see this as being action-guiding in a number of ways. First, if we wish to have more accurate credences about future events, we should create larger-scale prediction markets. Second, when we have access to sufficiently thick prediction markets, we should defer to their suggested credences. Third, to the extent one disagrees with a prediction market’s suggested credence, one should bet in the market, as one would have a positive expected return.

Works Cited
Christensen, David Phiroze, and Jennifer Lackey. The Epistemology of Disagreement: New Essays. Oxford University Press, 2016.
Elga, Adam. “Reflection and Disagreement.” Noûs, vol. 41, no. 3, Sept. 2007, pp. 478–502., doi:10.1111/j.1468-0068.2007.00656.x.
Fama, Eugene F. “Short-Term Interest Rates as Predictors of Inflation.” The American Economic Review, vol. 65, no. 3, June 1975, pp. 269–282., doi:10.1787/157052064225.
Fama, Eugene F. “The Behavior of Stock-Market Prices.” The Journal of Business, vol. 38, no. 1, 1965, pp. 34–105., doi:10.1086/294743.
Fama, Eugene F. “Efficient Capital Markets: A Review of Theory and Empirical Work.” The Journal of Finance, vol. 25, no. 2, May 1970, pp. 383–417., doi:10.2307/2325486.
Frances, Bryan. Disagreement. Polity, 2014.
Hanson, Robin, and Ryan Oprea. “A Manipulator Can Aid Prediction Market Accuracy.” Economica, vol. 76, no. 302, 2009, pp. 304–314., doi:10.1111/j.1468-0335.2008.00734.x.
Hanson, Robin. “Decision Markets.” IEEE Intelligent Systems, vol. 14, no. 3, 1999, pp. 16–20.
Malkiel, Burton G. “The Efficient Market Hypothesis and Its Critics.” Journal of Economic Perspectives, vol. 17, no. 1, 2003, pp. 59–82., doi:10.1257/089533003321164958.
Rothschild, David. “Forecasting Elections.” Public Opinion Quarterly, vol. 73, no. 5, 2009, pp. 895–916., doi:10.1093/poq/nfp082.
Silver, Nate. The Signal and the Noise. Penguin, 2013.
Sunstein, Cass R. “Deliberating Groups versus Prediction Markets (or Hayek's Challenge to Habermas).” Episteme, vol. 3, no. 3, 2006, pp. 192–213., doi:10.3366/epi.2006.3.3.192.
Tetlock, Philip E., and Dan Gardner. Superforecasting: The Art and Science of Prediction. Random House, 2016.
