
Black Scholes and the normal distribution

March 12, 2013

There have been lots of comments and confusion, especially in this post, over what people in finance do or do not assume about how the markets work. I wanted to dispel some myths (at the risk of creating more).

First, there’s a big difference between quantitative trading and quantitative risk. And there may be a bunch of other categories that also exist, but I’ve only worked in those two arenas.

Markets are not efficient

In quantitative trading, nobody really thinks that “markets are efficient.” That’s kind of ridiculous, since then what would be the point of trying to make money through trading? We essentially make money because they aren’t. But of course that’s not to say they are entirely inefficient. Some approaches to removing inefficiency, and some markets, are easier than others. There can be entire markets that are so old and well-combed-over that the inefficiencies (that people have thought of) have been more or less removed and so, to make money, you have to be more thoughtful. A better way to say this is that the inefficiencies that are left are smaller than the transaction costs that would be required to remove them.

It’s not clear where “removing inefficiency” ends and where a different kind of trading begins, by the way. In some sense all algorithmic trades that work for any amount of time can be thought of as removing inefficiency, but then it becomes a useless concept.

Also, you can see from the above that traders have a vested interest in introducing new kinds of markets to the system, because new markets have new inefficiencies that can be picked off.

This kind of trading is very specific to a certain kind of time horizon as well. Traders and their algorithms typically want to make money in the average year. If there's an inefficiency with a time horizon of 30 years it may still exist, but few people are patient enough for it (I should add that we also probably don't have good enough evidence that such trades would work, considering how quickly the markets change). Indeed the average quant shop is going in the opposite direction, toward high-speed trading, for that very reason: to find the time horizon at which there are still obvious inefficiencies.

Black-Scholes

A long long time ago, before Black Monday in 1987, people didn’t know how to price options. Then Black-Scholes came out and traders started using the Black-Scholes (BS) formula and it worked pretty well, until Black Monday came along and people suddenly realized the assumptions in BS were ridiculous. Ever since then people have adjusted the BS formula. Everyone.

There are lots of ways to think about how to adjust the formula, but a very common one is through the volatility smile. This allows us to remove the BS assumption of constant volatility (of the underlying stock) and replace it with whatever implied volatility is actually traded in the market for that strike price and that maturity. As this commenter mentioned, the BS formula is still used here as a convenient reference to do this calculation. If you extend your consideration to any maturity and any strike price (for the same underlying stock or thingy) then you get a volatility surface by the same reasoning.
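
As a concrete illustration of how the formula gets used as a quoting convention, here is a minimal Python sketch (mine, not from the original post): it prices a European call with Black-Scholes and then backs the implied volatility out of a market price by root finding. The spot, rate, strikes and "market quotes" are all made-up numbers.

```python
# Hedged sketch: Black-Scholes as a quoting convention. Given a market price,
# back out the implied volatility at that strike and maturity. The spot, rate
# and "market quotes" below are made-up illustrative numbers.
from math import exp, log, sqrt

from scipy.optimize import brentq
from scipy.stats import norm


def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)


def implied_vol(price, S, K, T, r):
    """Invert bs_call in sigma; plotted against K, this traces out the smile."""
    return brentq(lambda sigma: bs_call(S, K, T, r, sigma) - price, 1e-6, 5.0)


S, r, T = 100.0, 0.02, 0.5
for K, market_price in [(80.0, 21.5), (100.0, 6.0), (120.0, 1.2)]:
    print(f"strike {K:6.1f}  implied vol {implied_vol(market_price, S, K, T, r):.3f}")
```

Doing this for every quoted strike at one maturity gives the smile; doing it across maturities as well gives the surface described above.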

Two things to mention. First, you can think of the volatility smile/surface as adjusting the assumption of constant volatility, but you can also ascribe to it an adjustment of the assumption of a lognormal distribution for the underlying stock (equivalently, normally distributed log-returns). There's really no way to disentangle those two assumptions, but you can convince yourself of this by a thought experiment: if the volatility stays fixed but the presumed distribution of returns gets fatter-tailed, for example, then option prices (for strikes far from the current price) will change, which will in turn change the implied volatility according to the market (i.e. the smile will deepen). In other words, the smile adjusts for more than one assumption.
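
That thought experiment can be run numerically. The rough Monte Carlo sketch below (my illustration, not from the post) keeps the total variance of log-returns fixed but fattens the tails with a simple two-regime normal mixture, prices the same far out-of-the-money call both ways, and pushes both prices back through the implied_vol function from the sketch above. The fat-tailed price comes out higher, so its implied vol does too. All parameters are made up.

```python
# Hedged sketch of the thought experiment: same total variance, fatter tails
# (via a crude two-regime normal mixture), and the far OTM call gets more
# expensive, i.e. its implied vol rises. Reuses bs_call / implied_vol above.
import numpy as np

rng = np.random.default_rng(0)
S, K, T, r, sigma = 100.0, 130.0, 0.5, 0.0, 0.20
n = 1_000_000

z = rng.standard_normal(n)
# 10% of the time volatility doubles; the scales are chosen so the mixture of
# shocks still has unit variance, like the plain normal draws.
p_jump, s_high = 0.10, 2.0
s_low = np.sqrt((1.0 - p_jump * s_high ** 2) / (1.0 - p_jump))
scale = np.where(rng.random(n) < p_jump, s_high, s_low)


def mc_call_price(shocks):
    """Monte Carlo call price with the average terminal price pinned to S (r = 0)."""
    x = sigma * np.sqrt(T) * shocks
    ST = S * np.exp(x) / np.mean(np.exp(x))
    return np.mean(np.maximum(ST - K, 0.0))


p_thin, p_fat = mc_call_price(z), mc_call_price(scale * z)
print("prices:      ", round(p_thin, 3), round(p_fat, 3))
print("implied vols:", round(implied_vol(p_thin, S, K, T, r), 3),
      round(implied_vol(p_fat, S, K, T, r), 3))
```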

The other thing to mention: although we've done a relatively good job adjusting to market reality when pricing an option, when we apply our current risk measures like Value-at-Risk (VaR) to options, we still assume normally distributed changes in the risk factors (one of the risk factors, if we were pricing options, would be the implied volatility itself). So in other words, we might have a pretty good view of current prices, but it's not at all clear we know how to make reasonable scenarios of future price moves.

Ultimately, this assumption of normal distributions of risk factors in calculating VaR is actually pretty important in terms of our view of systemic risks. We do it out of computational convenience, by the way. That and because when we use fatter-tailed assumptions, people don’t like the answer.
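
To make that last point concrete, here is a toy, one-asset comparison (mine, with made-up data) of the VaR you get from the same return history under a normal assumption, a fitted Student-t assumption, and plain historical simulation. The specific numbers don't matter; the point is that the answers can diverge a lot once you go far into the tail, and the fat-tailed ones are usually the ones people don't like.

```python
# Hedged sketch: one return series, three VaR numbers. The "history" is
# simulated (mostly calm days plus an occasional violent one) and made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
returns = np.concatenate([rng.normal(0.0, 0.01, 2450),
                          rng.normal(0.0, 0.06, 50)])

mu, sd = returns.mean(), returns.std(ddof=1)
df, loc, scale = stats.t.fit(returns)   # fat-tailed fit to the same data

for alpha in (0.99, 0.999):
    var_normal = -(mu + sd * stats.norm.ppf(1 - alpha))
    var_t = -stats.t.ppf(1 - alpha, df, loc=loc, scale=scale)
    var_hist = -np.quantile(returns, 1 - alpha)
    print(f"{alpha:.1%} VaR   normal {var_normal:.3f}   "
          f"student-t {var_t:.3f}   historical {var_hist:.3f}")
```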

Categories: finance, modeling, statistics
  1. Charles
    March 12, 2013 at 7:43 am

    I’m always surprised by the prevalence of the normal distribution assumption. It works for population samples, but I don’t know why it’s assumed to work for VaR.


    • Leon Kautsky
      March 12, 2013 at 11:59 am

      “I’m always surprised by the prevalence of the normal distribution assumption. It works for population samples, but I don’t know why it’s assumed to work for VaR.”

      It's assumed to work for VaR because regulators use VaR. If you tell a regulator that with 95% probability your losses are bounded by negative infinity they will pull the plug on your operation – even though this is as true for you as it is for your (lying) competitors who report that their VaRs are finite.

      Investors can understand VaR, and while investors (typically) love volatility, VaR gives the lie to how much they love downside risk. Hence you want to keep it finite.


  2. Zathras
    March 12, 2013 at 8:55 am

    “Ultimately, this assumption of normal distributions of risk factors in calculating VaR is actually pretty important in terms of our view of systemic risks. We do it out of computational convenience, by the way. That and because when we use fatter-tailed assumptions, people don’t like the answer.”

    If they don’t like the answer of a VaR calculation for a non-normal distribution, they REALLY don’t like the answer of a CVaR calculation for a non-normal distribution. From my experience, infinity is never an acceptable answer! 🙂
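
An illustrative aside on the "infinity" point above (mine, not Zathras's): using the standard closed-form expected shortfall of a Student-t distribution, you can watch the 99% CVaR explode as the tail gets heavier; at or below one degree of freedom the mean itself is infinite, and the answer is literally infinity.

```python
# Hedged sketch: 99% CVaR (expected shortfall) of a unit Student-t, using the
# standard closed form ES = f(q) / (1 - alpha) * (nu + q^2) / (nu - 1), which
# is finite only for nu > 1. The degrees of freedom below are illustrative.
from scipy import stats


def cvar_t(df, alpha=0.99):
    q = stats.t.ppf(alpha, df)
    return stats.t.pdf(q, df) / (1 - alpha) * (df + q ** 2) / (df - 1)


for df in (10, 4, 2, 1.1, 1.01):
    print(f"df = {df:<5}  99% CVaR ~ {cvar_t(df):,.1f}")
```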


    • March 12, 2013 at 8:56 am

      I agree that putting together assumptions so that the expected value is infinity is a bad choice.


  3. FogOfWar
    March 12, 2013 at 9:19 am

    Love this post!

    FoW


  4. March 12, 2013 at 9:30 am

    I am a neuroscientist and don’t really have any knowledge of the finance business. From outside it looks like financiers demand money be generated on an exponential curve, such that profits increase dramatically every quarter to infinity. If firms don’t demonstrate that curve, investors may yank funds. Yet, my science training tells me no system we have ever measured increases exponentially to infinity. First, the resources to allow that to happen don’t exist. Second, no system we have ever measured displays an infinite positive feedback pattern. It is good to see you speak of investment as behaving according to a normal distribution.


    • March 12, 2013 at 9:32 am

      You misunderstood me. I don’t claim anything in the real world behaves so predictably. But you’re right that claims of exponential growth are ridiculous.


      • March 15, 2013 at 4:22 am

        Not sure if I agree, although it depends on how you define exponential, I think. Even a bank savings account with a fixed rate of interest will exhibit non-linear upward growth, due to compounding. This is why Ben Franklin left a tidy sum in his will, and ended up founding half of Philadelphia’s major institutions (post office, fire station, university (now UPenn)).


        • March 16, 2013 at 6:41 am

          It seems to me that your Ben Franklin example represents a restriction of range problem. Yes, if you put funds into an account with compound interest and never touch it, over time it will continue to grow. However, if you remove funds it will decrease. If you shorten your view enough, you can make any picture you like with data. If you take the long view however, patterns over time that are responses to the many variables which can affect a fund come into focus.


        • March 16, 2013 at 7:45 am

          Yes agreed, it depends on factors such as how long the money is compounded. I wanted to point out that there is nothing ethically wrong with expecting your money to compound exponentially. The fact that these people do this with riskier instruments than a straight bond or savings account merely clouds the underlying principle at work, namely that of compounding. By definition, this compounding takes some time. How that money is put to use afterwards if it compounds successfully, and/or the motivations of the person doing the compounding, determines whether or not it's an ethical act. The fact of compounding is neutral. It's just a characteristic of money, and a compensation for the opportunity cost of using money now vs. later.


        • March 16, 2013 at 8:05 am

          You raise the question of ethics. I just had an interesting conversation with an epistemologist who studies the ethics of thinking. I suppose this is an issue for every field.


    • Leon Kautsky
      March 12, 2013 at 11:44 am

      The economy as a whole has grown exponentially since the 1800s.


  5. Josh
    March 12, 2013 at 9:39 am

    I like the post.

    But you ignored a very major point until your closing comment “people don’t like the answer”. I know you are aware of it because you’ve addressed it in past blogs but it shouldn’t be glossed over here.

    Normal distributions are not used for pricing options, because options traders want accurate answers. Normal distributions are used in VaR because management wants comforting (that is, understated) answers. They want to take risk and more accurate estimates would just get in the way.

    Yes, it is easier computationally, but computers are powerful and quants like to do complex computations (sometimes overly so). The reason normal distributions are used is bad incentives/motives, mostly on the part of the people who employ the modelers. The modelers know better but, as Upton Sinclair said, "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"

    Most CEOs are probably smart enough to know better too. But they also like the results and if the taxpayer has their back, why should they behave differently?


    • March 12, 2013 at 9:40 am

      All true. I had to bring my 4-year-old to daycare so I didn’t have time for all of that.


  6. Jonathan
    March 12, 2013 at 9:54 am

    I have a historical correction. Options were priced using non-normal distributions long before 1987.

    Fischer Black (of Black-Scholes fame) wrote an article, "Fact and Fantasy in the Use of Options," Financial Analysts Journal, 1975. He says "Options that are way out of the money tend to be overpriced." (Presumably by "overpriced" he means "trade at higher volatilities" – that is, a smile.) By using the word "overpriced" he could be implying that he believes the higher volatilities in the tails are not justified, but I think he just means "overpriced according to my model" (which he realizes is imperfect, because later in the article he says
    "A stock that drops sharply in price is likely to show a higher volatility in the future (in percentage terms) than a stock that rises sharply in price."). That implies that he does not believe the true distribution of prices is log-normal and that there should be a skew in volatilities. In addition, the article has a lot of discussion about time-varying volatility and in many ways anticipates GARCH. If volatility varies over time, there should be a non-normal distribution and, as Fischer's article attests, options were priced this way in 1975.


    • March 12, 2013 at 10:00 am

      As I understand it, there was not a huge amount of trading because people were unsure of how to price options. But what trading there was likely did not follow the formula and indeed probably acted more like insurance, which typically overprices disaster scenarios. I don’t think we are really disagreeing.


      • jonathan
        March 12, 2013 at 10:48 am

        I don’t know if you and I disagree.

        But I disagree with your statement

        “traders started using the Black-Scholes (BS) formula and it worked pretty well, until Black Monday came along and people suddenly realized the assumptions in BS were ridiculous”

        In addition to Fischer Black’s article, Engle and others were publishing on time-varying volatility. Computers were capable of Monte Carlo simulations to estimate options prices. Models are better today. But people realized that the true distribution was fat-tailed and were adjusting for it.

        I do agree that part of the skew is an insurance charge (and taking advantage of retail net buying of options) but I believe that’s true today, too.

        I don’t mean to suggest that the skew didn’t increase in Oct. 87. It did.

        I don’t mean to pick on you. It’s people like Nassim Taleb who imply that people really believe in normal distributions that have gotten me sensitive about this issue.


        • March 12, 2013 at 10:51 am

          As I understand it, right before Black Monday (but not way before), the inferred volatility smile was in fact pretty straight. I haven’t actually seen the data and backed out the smile myself, but that’s what I’ve heard. That would imply that “the market” actually “believed” in the formula for some amount of time.


        • jonathan
          March 12, 2013 at 11:00 am

          I know some knowledgeable people have made that claim.

          My recollection is quite different but I was not very involved in equity options (or equity index options) at that time.

          It would be interesting to look at the data.


        • Bindicap
          March 12, 2013 at 11:22 pm

          FWIW, I heard the same story about the vol smile after Black Monday from some prof of math finance who was old enough to have a clue about it. Could still be apocryphal though; I am unable to check.


        • Eric Titus
          March 13, 2013 at 11:55 am

          You might be interested in Doug MacKenzie's work on the performativity of Black-Scholes. Basically, he finds that one reason it "fit" so well early on is because arbitrageurs were "punishing" those who deviated from model predictions. The way he discusses it seems fairly close to how you are talking about it here. He also has some graphs on the fit of the BS model in there.


        • Jonathan
          March 13, 2013 at 12:59 pm

          @Eric,

          Thank you very much for the reference to MacKenzie. His work does look fascinating in general (my only problem is the size of my “to read” queue but I’ll try to get to it soon).

          But, I don’t see the specific paper you are referring to. Could you please provide a reference?

          “Punishing” seems like a curious choice of words but I can easily believe that traders who believed strongly in the model would trade in a way that would push prices into line with model whether or not the model conformed to reality. And, I can even believe that they could make money doing so so that it would be a self-reinforcing behavior for a while until reality made itself felt in a particularly emphatic way such as the crash.

          Fascinating example and even more so since there is the potential for parallels in the sub-prime market which he also has written about.

          So many papers, so little time.

          But I would really like to see his Black Scholes paper so please do provide a reference.


        • Eric Titus
          March 14, 2013 at 2:26 am

          The paper is “Constructing a Market, Performing Theory” by Mackenzie (Donald, typo) and Millo. It’s a neat paper and very readable, although that should be taken with a grain of salt coming from someone who reads sociology/science studies articles all day.

          Punishing probably isn’t the right word, but you did have traders arbitraging when option values deviated from theoretical predictions, which keeps the price in line unless something dramatic happens.

          He has written on the sub-prime market, but I am more skeptical of his argument. He has some interesting stuff on how the ways ABSs and CDOs were evaluated diverged and led to overrating. But in that case there was much more pressure from the banks to undervalue mortgages, so it is less a case of ideas getting out of hand.


        • jonathan
          March 14, 2013 at 8:37 am

          Eric,

          Thanks a lot. I have read most of "The Big, Bad Wolf and the Rational Market." It is quite interesting but frustratingly did not have any pictures or data, just anecdotes. So, I'll track down "Constructing a Market."

          I will look at the papers on the more recent crisis, too.


  7. Lukasz
    March 12, 2013 at 10:30 am

    The normal VaR is a bit of a non-issue. New financial regulation requires banks to use historical VaR for the computation of IM (initial margin), for instance, and the history is supposed to include at least one period of financial distress. There are still a lot of questions to answer, for instance how you apply the historical changes to today's data, but it is getting close to quite a "good" measure. The problem with all these models is that even if you have a very adequate marginal distribution of each factor, you have no way of understanding their dependency, especially if it is not linear. This is especially important in the case of defaults and the dependency of the various market factors on the default of various institutions. Also there is no good way to model real-world default probabilities: historical probabilities are clearly difficult to obtain, while the ratings-based ones are BS.


    • Josh
      March 12, 2013 at 10:54 am

      But the broader point is that if people have incentives to create good models, they will (and they will realize their models are imperfect and generally be cautious about using them).

      If they have incentives to create bad models, that is what they will do.


  8. Lukasz
    March 12, 2013 at 11:37 am

    I agree. The pricing models have an obvious incentive: the better your price, the more money you make. It is much harder with risk, especially at very high confidence levels like 98% VaR. Perhaps I am fine with this. There are also very interesting questions to answer. Imagine I offer you a game where I toss a fair coin and give you $N if it's heads and nothing if it's tails. How much are you willing to pay to participate? The expected outcome is $N/2, so this is also a "fair" price. But I personally would be "happy" to pay that much with N = $10, and much less inclined if N = $1,000,000. On the other hand, if I do it with your money I will be happy to pay the "fair" price. How do you create incentives for me not to?
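
A quick sketch of that coin-flip point (my illustration, with an assumed wealth level and utility function, not Lukasz's numbers): a risk-averse player with log utility and wealth W will pay close to the "fair" N/2 when N is small relative to W, and far less when N is large, while a risk-neutral player spending someone else's money pays N/2 regardless.

```python
# Hedged sketch: maximum price a log-utility player with wealth W would pay
# for a 50/50 shot at $N, versus the risk-neutral "fair" price N/2.
# W and the choice of log utility are illustrative assumptions.
import math


def max_price_log_utility(N, W):
    """Largest p with 0.5*ln(W - p + N) + 0.5*ln(W - p) >= ln(W)."""
    # Setting expected utility equal to ln(W) gives (W - p + N)(W - p) = W^2,
    # a quadratic in y = W - p.
    y = (math.sqrt(N ** 2 + 4 * W ** 2) - N) / 2.0
    return W - y


W = 10_000.0
for N in (10.0, 1_000.0, 1_000_000.0):
    print(f"N = {N:>11,.0f}   fair price = {N / 2:>9,.0f}   "
          f"max price with own money = {max_price_log_utility(N, W):>8,.2f}")
```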


  9. March 12, 2013 at 10:31 pm

    I am continually puzzled by these claims (e.g. you, Taleb) that normality of anything is necessarily assumed in computation of VaR. Maybe that is true somewhere, but that is just not what I have observed in practice, and in any event it is not something inherent in the definition of VaR itself.


    • Bindicap
      March 12, 2013 at 11:18 pm

      Yeah, it doesn’t have to be and is not normal at many places (most?).

      VAR isn’t so sensitive to vol smile anyway; portfolio correlation assumptions are much more important. The issue is more that your analytic VAR model infrastructure should be tied to your fully accurate valuation and risk models or you have an infrastructure nightmare.

      But I agree with Lukasz that VAR from observed historical returns has more importance anyway. Plus scenario analysis.
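
On the correlation point above, here is a rough two-asset sketch (mine, with made-up weights and vols) of how much a parametric-normal portfolio VaR moves as the assumed correlation moves, which is the kind of sensitivity the comment is pointing at.

```python
# Hedged sketch: 99% parametric-normal VaR of a two-asset portfolio as a
# function of the assumed correlation. Weights and daily vols are made up.
import numpy as np
from scipy.stats import norm

w = np.array([0.5, 0.5])        # portfolio weights (assumed)
vols = np.array([0.02, 0.03])   # daily vols of the two risk factors (assumed)
z = norm.ppf(0.99)

for rho in (-0.5, 0.0, 0.5, 0.9):
    cov = np.array([[vols[0] ** 2, rho * vols[0] * vols[1]],
                    [rho * vols[0] * vols[1], vols[1] ** 2]])
    print(f"rho = {rho:+.1f}   99% VaR = {z * np.sqrt(w @ cov @ w):.4f}")
```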


    • March 13, 2013 at 6:04 am

      I worked at RiskMetrics. Believe me, the distributions were normal. And I'm not advocating it, I'm complaining about it, but this was for the "independent third-party risk assessment".

      And I agree that historical VaR is better, but only if you have a time series that goes back far enough for it to make sense. If you have recently introduced a new kind of instrument to the market you just don't have enough information to understand what can go wrong. Or if you have a new CDS, same thing, but in that case you should know from experience that CDS returns are very fat-tailed.


      • March 13, 2013 at 7:19 am

        Ok, but then that is a statement about RiskMetrics, not about VaR. Again, the VaR I've seen, and which was actually used (for capital, trading limits, etc.), made no such assumption.

        This is not to say that I think VaR is hunky-dory, of course. In fact the typical implementation in practice probably has data problems etc that are far more serious than those committed by assuming normality 🙂 (or for that matter, by limitations of timeseries due to products being too young)


        • Jonathan
          March 13, 2013 at 9:08 am

          I think historical VaR is at least as problematic as normality.

          Should I believe, based on the past 10 years, that there is substantial risk of a sharp drop in interest rates but little risk of a sharp rise? I think it is even more problematic in a multi-asset setting, where we have inevitably seen only a small fraction of potential outcomes and where almost all of the important risk problems lie.

          I wish I had a solution but I don’t. I think a great deal of judgment is inevitably necessary but I recognize that that is most prone to willful misuse. My best advice is to use more than one approach and still not to rely too heavily on your results. Another case of “I don’t know and you don’t either” to which I would add that the biggest risk is not being aware of that.

          Or, to quote James Thurber, "It is better to know some of the questions than all of the answers."


      • Pseudonym
        March 14, 2013 at 8:55 am

        RiskMetrics still make unrealistic assumptions; indeed, in RiskMetrics’ non-normal model implementation, all risk factors must exhibit the same degree of non-normality.

        This does not address the original commenter’s accurate observation that no internal models are as simplistically calibrated.


        • March 14, 2013 at 8:56 am

          When people actually care about risks, they make more accurate assumptions. RiskMetrics customers don't demand accurate assumptions. Enough said.


  10. Lukasz
    March 13, 2013 at 9:23 am

    Not sure anyone still reads this. I don't care about RiskMetrics; RM has zero incentive to build an adequate model. Also I think it is a fundamentally ill-posed problem. I simply don't know why I should use a probability measure calibrated to one period of history or another, or why I would use history at all. There is an interesting paper by Ross: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1918653. Perhaps something in this direction. It will still not solve the dependency problem, but it is a start toward more adequate measures.


  11. March 14, 2013 at 5:04 am

    1 – Mathematicians and quants still haven't come up with a practical alternative for an entire market – it's easy to use a non-normal distribution for a single risk factor, but we have to model risk for a portfolio of several thousand risk factors. If you have a better alternative, I'm all ears – but it's easy just to criticise risk managers from the sidelines.

    2 – Modelling complex payoffs and strategies has a much greater impact on a risk metric than any assumption about the underlying distribution of risk factors. Model a CDO as a bond and no matter how clever your stats, your answers are meaningless. Ignore the volatility surface – ditto.

    3 – Historical simulation is dangerous. It says that any outcome that has not already occurred cannot occur. Any risk manager who is comfortable with that assumption needs to find another job.


  12. Finster
    March 14, 2013 at 5:34 am

    Every disaster in (military) history started with a bloody assumption.


  13. March 14, 2013 at 7:25 am

    The EMH is a tautology. It says that the "average investor" cannot beat the market consistently. But what is "the market"? Well, it's the outcome of the interactions of the investors therein. That means that it's just the result of the "average" of investor decisions.

    So, what the EMH says is that the average investor cannot beat the market. But since “the market” is basically just the outcome of the average of investment decisions what the “theory” really says is that the market cannot beat the market. Or, that the average investor cannot beat the average investor.

    It's a tautology and it's meaningless. I gave a presentation on this the other day. People find it very amusing when you work through this and point it out. The language of the EMH is also meaningless when you see it for what it is.


    • jonathan
      March 14, 2013 at 8:47 am

      There is one problem with your reasoning, the premise.

      The Efficient Market Hypothesis (of which there are a few versions) does not refer to the average investor. It says “NO ONE can expect to outperform the market, risk adjusted”. Not just the average investor but that public information should not be useful because market prices should already incorporate it.

      Of course, ex post some will do better, some worse (and as you point out, on average equal before costs) but that should not be predictable or repeatable. So, Warren Buffett should not exist (or he has just been lucky, or used inside information).

      I am not saying that I believe in the EMH, just that it is not as easy to dismiss as you claim.


      • March 14, 2013 at 9:21 am

        It's not "no one". Check out, for example, Malkiel's 2003 paper. There he finds that about 75% of large-cap equity firms didn't beat the S&P over a ten-year period. He takes this as proof of the EMH.

        “Throughout the past decade, about three-quarters of actively managed funds have failed to beat the index… Managed funds are regularly outperformed by broad index funds, with equivalent risk… The record of professionals does not suggest that sufficient predictability exists in the stock market or that there are recognizable and exploitable irrationalities sufficient to produce excess returns.” (Malkiel, 2003. Pp77-78)

        If you were correct then this would not be proof of the EMH because 25% of those firms had consistently beat the market over ten years. It is quite clear in this paper that the benchmark for “proof” of the EMH is that a larger number of firms don’t beat the market than do.

        This clearly indicates, beyond a shadow of a doubt, that despite what its proponents may say around the water-cooler and despite what many finance students are taught in class, in the journals the criterion put forward by the EMH proponents is NOT that no one can beat the market. Merely that you — the average investor — are not likely to beat the market.


        • March 14, 2013 at 9:32 am

          I am not a financier or in the financial field, so the details of your arguments surely go over my head. However, in the aggregate, it looks like what financiers are trying to do is find a way to create an infinite positive feedback loop such that profits will increase exponentially every quarter into infinity. Is this more or less accurate?


        • March 14, 2013 at 9:40 am

          Something like that, yes. But that’s sort of the nature of capitalism itself. However, finance capital tries to sever the tie between the accumulation of capital in the real economy — which is based on the expansion of productive capacity and rising standards of living — and the accumulation of money-capital on the financial side of the economy. This is why what appears to us as progress in the real economy appears to us as fantastic, fictitious nonsense on the finance side.


        • March 14, 2013 at 9:44 am

          Interesting. So is this goal, infinite profit, a key player in the cycle of booms and busts we see in the financial markets?


        • March 14, 2013 at 9:49 am

          Of course! Financial markets are very important. They ensure that money is distributed to firms so that the firms can make real investments — in machines, new technology etc. — and this leads to progress and rising standards of living.

          But when the financial markets become untethered from the real economy they just start generating bubbles and redistributing income. And that is precisely what we have seen in the US since the late 1970s and early 1980s.


        • March 14, 2013 at 9:51 am

          Thanks for that cogent, clear explanation.


        • Matt
          March 14, 2013 at 4:03 pm

          It’s not fair to say that financiers have models that assume eventual infinite profits and therefore, since infinite profits are not possible, the models are wrong. That would say all exponentially growing models are wrong, including things like US GDP or the human population. It’s true that at some eventuality the United States will no longer exist and the human population will die off, but for timeframes we care about (say 10-20 years) the models are reasonably accurate.

          I do not think the exponential profit assumption applies very well to individual companies, but in the post-war era and even before, returns of the stock market roughly follow compounded interest, i.e. exponential growth. The same is true if you kept money in a savings account for the last 50 years.

          What matters is whether the stock market’s profits as a whole justify its valuation vs. interest in other securities such as Treasuries. The valuation at the peak of the bubble in 2000 made sense if you used an extremely low discount rate. The stock market eventually got back to its peak valuation. However, you would have far more money by putting that money into Treasuries in 2000 and then going to stocks when they were cheap in 2009 and Treasuries yielded essentially nothing.


        • jonathan
          March 14, 2013 at 9:32 am

          I believe your standards of proof “beyond a shadow of a doubt” are somewhat faulty.

          I framed my statement carefully: I said that "NO ONE can EXPECT to outperform, risk-adjusted." Just to be clear, I also added that some will, ex post, outperform because there is performance variability. So, to do an ex post test you need to assess whether the number that outperform can be explained by chance.

          25% outperforming for 10 years is not surprising, especially since 10-year records inevitably have selection bias: many underperforming funds don't persist for 10 years.


        • March 14, 2013 at 9:36 am

          Are you talking about your own standards of proof here? Because I’m not concerned with your standards of proof. I’m concerned with the standards of proof that the major EMH proponents require to prove their so-called theory in the major academic journals.

          Based on their standards of proof it is only required that MOST FIRMS will not beat the market in the long-term (in Malkiel’s paper: ten years). Stated differently this says that “on average firms will not beat the market”. That’s a tautology for the reasons I pointed out above.

          If you have different standards of proof that's fine. I'm talking about the guys who actually created and disseminated this theory. For them the existence of Warren Buffett does not disprove their theory. But, at the end of the day, this makes their theory a tautology.


        • Matt
          March 14, 2013 at 4:18 pm

          It’s not a tautology if you frame the question in terms of serial correlation. The tautological question is whether most firms do not outpace the stock market. Since owning an index has very low transaction costs and active funds have higher transaction costs, you would expect most firms to do worse than the market since the market is the average of all pre-transaction-costs returns.

          But if a subsection of funds were better than the rest of funds, you wouldn’t just expect some funds would do better each year. You would expect the SAME funds would do better each year, i.e. serial correlation of relative returns.

          The evidence is somewhat mixed, but in general those funds that did well the last five years have as much chance of doing well next year as other funds. I do believe there are some investment managers which outperform the market over the long-term, such as Buffett and other value investors. To be one of these investors, they both have to act contrarian and, most importantly, have a buy-and-hold strategy over a long time-frame. The incentives for most funds are to do better this year since most investors move money into well-performing funds. Therefore they have incentives to try to guess what the market will do rather than the fundamental underlying valuation.

          That's pretty complex for the average investor, though, and therefore, even though I don't believe in the EMH, most investors should just invest in an index fund. I pretty much have all my savings in an index fund. But I don't necessarily think prices "reflect all publicly available information" either. For those willing to be contrarian and patient, there are differences between market value and fundamental value to be exploited.
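
A minimal sketch of the serial-correlation check described above (my illustration; the fund returns are simulated with no persistent skill, so the correlations should hover around zero): rank funds by excess return in one period and see whether the ranking carries over to the next.

```python
# Hedged sketch: serial correlation of relative fund performance. With
# simulated, skill-free excess returns the year-over-year rank correlations
# should be near zero; persistent outperformers would push them positive.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_funds, n_years = 500, 10
excess = rng.normal(0.0, 0.05, size=(n_funds, n_years))  # excess returns vs the index

corrs = []
for t in range(n_years - 1):
    rho, _ = spearmanr(excess[:, t], excess[:, t + 1])
    corrs.append(round(rho, 3))
print("year-over-year rank correlations:", corrs)
```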


  14. March 14, 2013 at 7:33 am

    I believe that markets are not efficient, like you mentioned, and that's how we are able to make money. This is especially true of emerging markets such as India. I used to never believe in technical analysis but now see more people making money only through that. Also, it's in the interest of big players to keep things inefficient at the cost of retail investors, who are set up to make losses.


  15. John Hall
    March 14, 2013 at 10:15 am

    I'll admit that I don't understand how anyone can reasonably be using normal VaR. Even under the assumption of normal log changes in securities prices, that means that prices and profits are lognormally distributed. Perhaps for a daily horizon you can easily approximate the lognormal with the normal, but at any longer horizon the numbers increasingly diverge. It really isn't terribly hard to use a Cornish-Fisher approximation for VaR to calculate the lognormal VaR (or CVaR).

    Also, there was some discussion above related to infinite values for VaR or CVaR. I have noticed this occurring when you let a t distribution have degrees of freedom below 4 (due to infinite kurtosis; between 1 and 2, the problem is infinite variance and undefined kurtosis). In general, I try to constrain the dof > 4 in order to avoid this problem. If securities have infinite variance or kurtosis, then it makes it impossible to compare portfolios. I'd rather constrain and compare with the knowledge that I could have a blow-up than throw up my hands.
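
For reference, a minimal sketch of the Cornish-Fisher ("modified") VaR idea mentioned above (mine, on a simulated, skewed return series): adjust the normal quantile using the sample skewness and excess kurtosis before scaling by the sample mean and standard deviation.

```python
# Hedged sketch: Cornish-Fisher ("modified") VaR. The return series is
# simulated and made up; skewness and excess kurtosis adjust the normal
# quantile, so left skew and fat tails push the VaR estimate up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
returns = stats.skewnorm.rvs(a=-4, loc=0.0, scale=0.015, size=2500,
                             random_state=rng)   # fake, left-skewed daily returns

alpha = 0.99
z = stats.norm.ppf(1 - alpha)
s = stats.skew(returns)
k = stats.kurtosis(returns)          # excess kurtosis

z_cf = (z
        + (z ** 2 - 1) * s / 6
        + (z ** 3 - 3 * z) * k / 24
        - (2 * z ** 3 - 5 * z) * s ** 2 / 36)

mu, sd = returns.mean(), returns.std(ddof=1)
print("normal 99% VaR:        ", round(-(mu + sd * z), 4))
print("Cornish-Fisher 99% VaR:", round(-(mu + sd * z_cf), 4))
```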


  16. Bagehot by-the-Bay
    March 14, 2013 at 12:53 pm

    I second a historical correction. Thorp was pricing warrants (which can be treated as options) in the late 1960s. The Black–Scholes model was published in 1973. As Emanuel Derman points out, what happened after the 1987 crash was that the once-familiar "curve" turned into more of a "smile." 🙂


  17. March 15, 2013 at 1:23 pm

    Are any of these models subjected to null hypothesis testing before being used?


  18. E.L. Wisty
    March 24, 2013 at 10:39 am

    Reblogged this on Pink Iguana.

