
Correlated trades

One major weakness of quantitative trading is that it rests on the concept of how correlated various instruments and instrument classes are. Today I'm planning to rant about this, thanks to a reader who suggested I should. By the way, I'm not claiming anything in today's post is new; I'm just providing a public service by explaining this stuff to people who may not know about it.

Correlation between two things measures how related they are. The maximum is 1 and the minimum is -1; correlation ignores the scale of the two things and concentrates only on the de-scaled relationship between them. Uncorrelated things have correlation 0.
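In symbols, this is the standard Pearson sample correlation; the de-scaling happens in the division by the two standard deviations:

\mathrm{corr}(X,Y) = \frac{\mathrm{cov}(X,Y)}{\sigma_X \, \sigma_Y} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2} \, \sqrt{\sum_i (y_i - \bar{y})^2}}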

All of the major financial models (for example Modern Portfolio Theory) depend crucially on the concept of correlation. And although it's known that, at a point in time, correlation can be measured in many different ways, and that even for a given choice of method the statistic itself is noisy, most of the models assume it's an exact answer and never bother to compute the sensitivity to error. Similar complaints can just as well be made about the statistic "beta", for example in the CAPM model.

To compute the correlation between two instruments X and Y, we list their returns, defined in a certain way, over a certain lookback window at a given horizon, and then throw those two series into the sample correlation formula. For example, we could choose log, percent, or even difference returns; we could look back 3 months or 30 years, or use an exponential downweighting scheme with a choice of decay (explained in this post); and we could be talking about hourly, daily, or weekly return horizons (or "secondly" if you are a high frequency trader).
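To make the mechanics concrete, here's a minimal sketch in Python of one common convention (log or percent returns, exponential down-weighting); the prices are made up and the function is my own illustration, not anybody's production code:

import numpy as np

def ewma_correlation(x, y, decay=0.94):
    # Weighted sample correlation, with weights decaying exponentially
    # as you go back in time (the newest return gets the largest weight).
    n = len(x)
    w = decay ** np.arange(n - 1, -1, -1)
    w = w / w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    var_x = np.sum(w * (x - mx) ** 2)
    var_y = np.sum(w * (y - my) ** 2)
    return cov / np.sqrt(var_x * var_y)

# Made-up daily closes for two instruments.
a = np.array([100.0, 101.2, 100.8, 102.5, 103.1, 102.0])
b = np.array([50.0, 50.4, 50.1, 51.0, 51.6, 51.2])

log_a, log_b = np.diff(np.log(a)), np.diff(np.log(b))
pct_a, pct_b = np.diff(a) / a[:-1], np.diff(b) / b[:-1]

print(ewma_correlation(log_a, log_b, decay=0.94))
print(ewma_correlation(pct_a, pct_b, decay=0.97))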

All of those choices matter, and you’ll end up with a different answer depending on what you decide. This is essentially never mentioned in basic quantitative modeling texts but (obviously) does matter when you put cash money on the line.

But in some sense the biggest problem is the opposite one. Namely, that people in finance all make the same choices when they compute correlation, which leads to crowded trades.

Think about it. Everyone shares the same information about what the daily closes are on the various things they trade on. Correlation is often computed using log returns, at a daily return horizon, with an exponential decay weighting typically 0.94 or 0.97. People in the industry thus usually agree more or less on the correlation of, say, the S&P and crude.

[I'm going to put aside the issue that most people don't go to the trouble of figuring out time zone problems. Even though the Asian markets close earlier in the day than the European or U.S. markets, that fact is ignored when computing correlations, say between country indices, and this leads to a systematic miscalculation of those correlations, which I'm sure sufficiently many quantitative traders are busy arbing.]

Why is this general agreement a problem? Because the models, which are widely used, tell you how to diversify, or what have you, based on their presumably perfect correlations. They are especially widely used by money managers, the people who move pension funds around (some $6 trillion in this country and $20 trillion worldwide), so there's enough money involved that bad assumptions really matter.

It comes down to herd mentality, plus cascading consequences. The system breaks down at exactly the wrong time: after everyone has piled into essentially the same trades in the name of diversification, a jolt in the market makes all of those managers pull back at once, liquidating their portfolios, which causes other managers to lose money, which makes that second tier of managers pull back and liquidate, and so on. In other words, the movements of various instruments become perfectly aligned in these moments of panic, so their correlations approach 1 (or perfectly opposed, so their correlations approach -1).

The same is true of hedge funds. They don't rely on CAPM-style models, because they are by mandate trying to be market neutral, but they certainly rely on factor-model based risk models, in equities but also in other instrument classes, and that means they tend to think certain trades will offset others because the correlation matrix tells them so.

These hedge fund quants move from firm to firm, sharing their correlation matrix expertise, which means they all have basically the same model. And since that model is considered to be in the realm of risk management rather than prop trading, and thus unsexy, nobody really spends much time trying to make it better.

But the end result is the same: just when there's a huge market jolt, the correlations everyone happily computed to protect their trades turn out to be unreliable.

One especially tricky thing is that correlations are long-term statistics and can't be re-estimated in short order (unless you look at very, very small horizons, but then you can't assume those correlations generalize to daily returns). So even a day on which "the market is completely correlated" doesn't make people abandon their models. Everyone has been trained to believe that correlations need time to bear themselves out.

In this time of enormous political risk, with the Eurozone at risk of toppling daily, I am not sure how anyone can be using the old models which depend on correlations and sleep well at night. I’m pretty sure they still are though.

I think the best argument I’ve heard for why we saw crude futures prices go so extremely high in the summer of 2008 is that, at the time, crude was believed to be uncorrelated to the market, and since the market was going to hell, everyone wanted “exposure” to crude as a hedge against market losses.

What’s a solution to this correlation problem?

One step towards a solution would be to stop trusting models that use greek letters to denote correlation. Seriously, I know that sounds ridiculous, but I’ve noticed a correlation between such models and blind faith (I haven’t computed the error on my internal estimate though).

Another step: anticipate how much overcrowding there is in the system. Assume everyone is relying on the same exact estimates of correlations and betas, take away 3% for good measure, and then anticipate how much reaction there will be the next time the Euroleaders announce a new economic solution and then promptly fail to deliver, causing correlations to spike.

I’m sure there are quants out there who have mastered this model, by the way. That’s what quants do.

At a higher level, I'm saying that we need to stop treating correlations as fixed over time and start treating them as being as volatile as prices. We already have markets in volatility; maybe we need markets in correlations. Or maybe they already exist formally and I just don't know about them.

At an even higher level, we should just figure out a better system altogether, one which doesn't put people's pensions at risk.


Two pieces of good news

I love this New York Times article, first because it shows how much the Occupy Wall Street movement has resonated with young people, and second because my friend Chris Wiggins is featured in it making witty remarks. It’s about the investment bank recruiting machine on college campuses (Yale, Harvard, Columbia, Dartmouth, etc.) being met with resistance from protesters. My favorite lines:

Ms. Brodsky added that she had recently begun openly questioning the career choices of her finance-minded friends, because “these are people who could be doing better things with their energy.”

Kate Orazem, a senior in the student group, added that Yale students often go into finance expecting to leave after several years, but end up staying for their entire careers.

“People are naïve about how addictive the money is going to be,” she said.

Amen to that, and wise of you to know it! There are still plenty of my grown-up friends in finance who won't admit that it's a plain old addiction to money keeping them in a crappy job where they are unhappy, and where they end up buying themselves expensive trips and toys to try to combat their unhappiness.

And here’s my friend Chris:

“Zero percent of people show up at the Ivy League saying they want to be an I-banker, but 25 and 30 percent leave thinking that it’s their calling,” he said. “The banks have really perfected, over the last three decades, these large recruitment machines.”

Another piece of really excellent news: Judge Rakoff has come through big time and rejected the settlement between the SEC and Citigroup. Woohoo!! From this Bloomberg article:

In its complaint against Citigroup, the SEC said the bank misled investors in a $1 billion fund that included assets the bank had projected would lose money. At the same time it was selling the fund to investors, Citigroup took a short position in many of the underlying assets, according to the agency.

“If the allegations of the complaint are true, this is a very good deal for Citigroup,” Rakoff wrote in today’s opinion. “Even if they are untrue, it is a mild and modest cost of doing business.”

A revised settlement would probably have to include “an agreement as to what the actual facts were,” said Darrin Robbins, who represents investors in securities fraud suits. Robbins’s firm, San Diego-based Robbins Geller Rudman & Dowd LLP, was lead counsel in more settled securities class actions than any other firm in the past two years, according to Cornerstone Research, which tracks securities suits.

Investors could use any admissions by Citigroup against the bank in private litigation, he said.

This raises a few questions in my mind. First, do we really have to depend on a randomly chosen judge having balls to see any kind of justice around this kind of thing? Or am I allowed to be hopeful that Judge Rakoff has now set a precedent for other judges to follow, and will they?

Second, something that came up at Sunday's Alt Banking group meeting: how many more cases are there that the SEC hasn't even bothered with, even just against Citigroup? I've heard the SEC was only scratching the surface on this, since that's their method.

Even if the SEC only ends up getting $285m, plus an admission that Citigroup did wrong by its clients, could it go back and prosecute 30 other deals, for 30 × $285m = $8.55b? Would that give us enough leverage to break up Citigroup and start working on our "Too Big to Fail" problems? And how about the other banks? What would this litigation look like if the SEC were really trying to kick some ass?


Who takes risks?

One of my readers sent me a link to this blogpost by James Wimberley, which talks intelligently about safety nets and their secondary effects (it also has a nifty link to the history of bankruptcy laws in the U.S.).

I want to home in on one aspect he describes, namely how, in spite of people in the U.S. considering themselves entrepreneurial, we are not so much. His theory is that it's because of a lack of safety net: people are worried about losing their health insurance, so they don't leave the safety of their job. Wimberley includes a chart of entry density, defined as the rate of registration of new limited liability companies per thousand adults of working age, by country.

The question of who takes risks is interesting to me, and made me think about my experiences in my various jobs. In fact this dovetails quite well with another subject I want to post on soon, namely who learns from mistakes; I have a theory that people who don't take risks also don't learn from mistakes well. But back to risk takers.

It kind of goes without saying that people in academics are not risk-taking entrepreneurs, but I'll say it anyway: they aren't. In fact it was one reason I wanted out; I'm much more turned on by risks than the people I met inside academics. In particular I don't want to have the same job with the same conditions for the rest of my life, guaranteed. I want adventure and variation and the excitement of not knowing what's next. When I went to a hedge fund I thought I would find my peeps.

However, most of the people I worked with at D.E. Shaw were really not risk takers at all, in spite of the finance cowboy image that they are so proud of. In fact, these were deeply risk averse people who wanted total control over their and their children’s destinies.

Moreover, the students I meet in finance programs (I took a few classes in Columbia's program when I knew I was leaving academic math) who hope to someday work at JP Morgan are some of the most risk-averse people ever. They are essentially trying to lock in a huge salary in return for working like slaves for a huge system.

Fine, so finance attracts people who are risk averse (and love money). That may be a consequence of its reputation and its age. So where are the risk takers? They must be in some other field. How about startups?

What has surprised me working at a startup is that the majority of startup people are also not what I'd consider risk takers. There are a few, though. These few tend to be young men with no families, kind of the "Social Network" model of college-aged boys working out of their dorm rooms. The women among them tend to be unmarried.

This is completely in line with Wimberley’s theory of safety nets, since it seems like once these men find a wife and have a kid they settle (speaking in general) into a risk averse mode. Once the women get married they tend to leave altogether.

In fact I'm kind of an oddball in that I'm married, have three kids, and actually love risk taking. Part of this is that I get to depend on my husband for health insurance, but that's clearly not the only factor, since otherwise you'd expect lots of women whose husbands have steady jobs to be joining startups, and that's not what happens.

I also have a feeling that the enormous amount of effort people tend to put into proving their credentials has something to do with all of this: when you take risks you are without title, and you win or lose on your own luck and hard work. For a culture with a strong desire to be credentialed, that's a tough one.

I don’t really have a conclusion today but I’m thinking that the story is slightly more complicated than just safety nets. I feel like maybe it starts out as a safety net issue but then it becomes a cultural assumption.


Topology of financial modeling

After my talk on Monday there were lots of questions and comments, which is always awesome (will blog the contents soon).

One person in the audience asked me if I'd ever heard of CompTop, which I hadn't. And actually, even though I vaguely understand what they're talking about, I still don't understand it sufficiently to blog about it. But it reminds me of something else which I would like to blog about, and which combines topology and modeling.

Maybe they’re even the same thing! But if so (especially if so), I’d like to get my idea down onto electronic paper before I read theirs. This is kind of like my thing about not googling something until you’ve tried to work it out for yourself.

So here’s the setup. In different fields in finance, there’s a “space” you work in. I worked in Futures, which you’ve heard of because when they talk about the price of barrels of oil going up (or maybe down, but you don’t hear about it as much when that happens), they are actually talking about futures prices. This also happens with basic food prices such as corn and wheat; corn and oil are linked of course through ethanol production. There are also futures on the S&P (or any other major stock index), bonds, currencies, other commodities, or even options on stock indices.

The general idea, which is given away by the name, is that when you buy a futures contract, you are placing a bet on the future price of something. Futures were started as a way for farmers to hedge their risks when they were growing food. But clearly other things have happened since then.

There's a way of measuring the dimension of this space of instruments, which is less trivial than counting them. For example, there is a "2 year U.S. bond" future as well as a "5 year U.S. bond" future, and you may guess (and you'd be right) that these don't really represent independent dimensions.

Indeed there's a concept of independence one can use coming from statistics (statistical independence), which is pretty subjective in that it depends on what time period and how much data you use to measure it (and lately we've seen less independence in general). But even so, you can go blithely forward and count how many dimensions your space has, and you generally get something like 15, at least before the credit crisis hit. This process is called PCA, and I'll write a post on it sometime.
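To give a flavor of the dimension count, here's a minimal sketch in Python on simulated data; the 95% variance-explained threshold and the 15-factor structure are my choices for illustration, not anything canonical:

import numpy as np

def effective_dimension(returns, var_explained=0.95):
    # Number of principal components needed to explain a given
    # fraction of total variance (rows = days, columns = instruments).
    cov = np.cov(returns, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # largest first
    cumulative = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(cumulative, var_explained) + 1)

# Simulated example: 500 days of returns on 40 instruments driven
# by 15 underlying factors plus idiosyncratic noise.
rng = np.random.default_rng(0)
factors = rng.normal(size=(500, 15))
loadings = rng.normal(size=(15, 40))
returns = factors @ loadings + 0.5 * rng.normal(size=(500, 40))

print(effective_dimension(returns))  # lands near the 15 "true" drivers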

Depending on which instruments you count, and how liquid you expect them to be, you can get a few more "independent" instruments, but you may also be fooling yourself with idiosyncratic noise caused by those instruments not being very liquid. So there are some subtleties.

Once you have your space measured in terms of dimension, you can choose a basis and look at things along the basis vectors. You can see how your different models behave, for example. You might see how the bond model you worked on places no bet on the basis vectors corresponding to lean hog futures.

That made me wonder about the following question. If we can measure the space of instruments, can we also measure the space of models? Is this some kind of dual? If so, is there some kind of natural upper bound on the number of (independent) models we could ever have which all make a profit?

Note there's also a way of making sure that models are statistically independent, so this part of the question is well-defined. But it's not clear what property of the space of instruments you are measuring when you ask for a model on that space which "makes a profit".

Another related question is whether such a question can really only be asked at a given time horizon (if it can be asked at all). I’ll explain.

The horizon of a model is essentially how long you expect a given bet to last in terms of time. For example, a weekly horizon model is something you’d typically only see on a slow-moving instrument class like bonds. There are plenty of daily models on equities, but there are also incredibly hyper fast “high frequency” models, say on currencies, which care about the speed of light and how different computers in the same room, being at different internal temperatures, can’t place consistent timestamps on ticker data.

These different horizons have such different textures that it makes me wonder whether an upper bound on the number of profitable models, if it exists at all, holds separately at each horizon.

Another related question: what about topological weirdness inside the space of instruments? If you plot some of this (take as a baby model three instruments that are essentially independent, choose a time horizon, and plot the simultaneous returns) the main characteristic you’ll see is that it’s a bounded blob. But inside that blob are certainly inconsistencies; in particular the density is not everywhere the same. Is the lack of consistency a signal that there’s a model there? Does the market know about holes, for example? Maybe not, which would mean that the space of (profitable) models is perhaps better understood as a space whose basis consists of something like “holes in the instrument space”, rather than a dual.
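Here's a quick sketch of that baby model in Python, with fat-tailed simulated returns standing in for three essentially independent instruments; with real data the interesting part would be the uneven density inside the blob:

import numpy as np
import matplotlib.pyplot as plt

# Three essentially independent instruments at a daily horizon;
# Student-t returns stand in for the fat tails of real data.
rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_t(df=4, size=(1000, 3))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(returns[:, 0], returns[:, 1], returns[:, 2], s=3)
ax.set_xlabel("instrument 1")
ax.set_ylabel("instrument 2")
ax.set_zlabel("instrument 3")
plt.show()  # a bounded blob; real data would have uneven density inside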

This is verging on something like what CompTop is talking about. Maybe. I’ll have to go read what they’re doing now.


Bayesian regressions (part 1)

I’ve decided to talk about how to set up a linear regression with Bayesian priors because it’s super effective and not as hard as it sounds. Since I’m not a trained statistician, and certainly not a trained Bayesian, I’ll be coming at it from a completely unorthodox point of view. For a more typical “correct” way to look at it see for example this book (which has its own webpage).

The goal of today's post is to abstractly discuss "Bayesian priors" and illustrate their use with an example. In later posts, though, I promise to actually write and share Python code illustrating Bayesian regression.

The way I plan to be unorthodox is that I'm completely ignoring distributional discussions. My perspective is: I have some time series (the x_i's) and I want to predict some other time series (the y) with them, and let's see if using a regression will help me; if it doesn't, I'll look for some other tool. What I don't want to do is spend all day deciding whether things are in fact student-t distributed or normal or something else. I'd like to just think of this as a machine that will be judged on its outputs. Feel free to comment if this is palpably the wrong approach or dangerous in any way.

A "Bayesian prior" can be thought of as equivalent to data you've already seen before starting on your dataset. Since we think of the signals (the x_i's) and response (y) as already known, we are looking for the most likely coefficients \beta_i that would explain it all. So the form a Bayesian prior takes is some information on what those \beta_i's look like.

The information you need about the \beta_i's is twofold. First you need their values, and second you need a covariance matrix describing their statistical relationship to each other. When I was working as a quant, we almost always had strong convictions about the latter but not the former, although in the literature I've been reading lately I see more examples where the values (really the mean values) of the \beta_i's are chosen but with an "uninformative covariance assumption".

Let me illustrate with an example. Suppose you are working on the simplest possible model: you are taking a single time series and seeing how earlier values of x predict the next value of x. So in a given update of your regression, y = x_t and each x_i is of the form x_{t-a} for some a > 0.

What is your prior for this? Turns out you already have one (two actually) if you work in finance. Namely, you expect the signal of the most recent data to be stronger than whatever signal is coming from older data (after you decide how many past lags to use by first looking at a lagged correlation plot). This is just a way of saying that the sizes of the coefficients should go down as you go further back in time. You can encode that prior through the diagonal of the covariance matrix, giving the coefficients on older lags less prior variance around zero.

Moreover, you expect the coefficients to vary continuously: you (probably) don't expect the third-most-recent variable x_{t-3} to have a positive coefficient but the second-most-recent variable x_{t-2} to have a negative one (especially if your lagged autocorrelation plot looks like this). This prior is expressed through the (symmetric) covariance matrix's subdiagonal and superdiagonal entries, which couple neighboring coefficients.
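To make this concrete, here's a minimal sketch in Python of the conjugate-normal version of this setup (a Gaussian prior on the \beta_i's plus Gaussian noise); the particular numbers in the prior are invented for illustration:

import numpy as np

def bayesian_regression(X, y, prior_mean, prior_cov, noise_var=1.0):
    # Posterior mean and covariance of the coefficients under a
    # Gaussian prior beta ~ N(prior_mean, prior_cov) and Gaussian noise.
    prior_prec = np.linalg.inv(prior_cov)
    post_cov = np.linalg.inv(prior_prec + (X.T @ X) / noise_var)
    post_mean = post_cov @ (prior_prec @ prior_mean + (X.T @ y) / noise_var)
    return post_mean, post_cov

# Prior for a 3-lag autoregression, encoding both convictions:
# shrinking diagonal (older lags held closer to zero) and positive
# coupling between adjacent lags (coefficients vary smoothly).
diag = np.array([1.0, 0.5, 0.25])
prior_cov = np.diag(diag)
for i in range(2):
    prior_cov[i, i + 1] = prior_cov[i + 1, i] = 0.3 * np.sqrt(diag[i] * diag[i + 1])
prior_mean = np.zeros(3)

# Made-up data: predict x_t from x_{t-1}, x_{t-2}, x_{t-3}.
rng = np.random.default_rng(2)
x = rng.normal(size=200)
X = np.column_stack([x[2:-1], x[1:-2], x[:-3]])
y = x[3:]

beta, _ = bayesian_regression(X, y, prior_mean, prior_cov)
print(beta)

The shrinking diagonal encodes the first conviction (older lags get smaller coefficients, since they're held closer to zero), and the positive off-diagonal entries encode the second (neighboring lags move together).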

In my next post I’ll talk about how to combine exponential down-weighting of old data, which is sacrosanct in finance, with bayesian priors. Turns out it’s pretty interesting and you do it differently depending on circumstances. By the way, I haven’t found any references for this particular topic so please comment if you know of any.

Data science: tools vs. craft

I’ve enjoyed how many people are reading the post I wrote about hiring a data scientist for a business. It’s been interesting to see how people react to it. One consistent reaction is that I’m just saying that a data scientist needs to know undergraduate level statistics.

On some level this is true: undergrad statistics majors can learn everything they need to know to become data scientists, especially if they also take some computer science classes. But I would add that it’s really not about familiarity with a specific set of tools that defines a data scientist. Rather, it’s about being a craftsperson (and a salesman) with those tools.

To set up an analogy: I’m not a chef because I know about casserole dishes.

By the way, I’m not trying to make it sound super hard and impenetrable. First of all I hate it when people do that and second of all it’s not at all impenetrable as a field. In fact I’d say it the other way: I’d prefer smart nerdy people to think they could become data scientists even without a degree in statistics, because after all basic statistics is pretty easy to pick up. In fact I’ve never studied statistics in school.

To get to the heart of the matter, it’s more about what a data scientist does with their sometimes basic tools than what the tools are. In my experience the real challenges are things like

  1. Defining the question in the first place: are we asking the question right? Is an answer to this question going to help our business? Or should we be asking another question?
  2. Once we have defined the question, we are dealing with issues like dirty data, too little data, too much data, data that's not at all normally distributed, or data that is only a proxy for our actual problem.
  3. Once we manhandle the data into a workable form, we encounter questions like: is that signal or noise? Are the error bars bigger than the signal? How many more weeks or months of data collection will we need before we trust this signal enough to bet the business on it?
  4. Then of course we go back to: should we have asked a different question, one whose answer would have been less perfect but which would definitely have given us an answer?

In other words, once we boil something down to a question in statistics it's kind of a breeze. Even so, nothing is ever as standard as what you'd actually find in a stats class; the chances of being asked a question that looks like a stats-class exercise are zero. You always need to dig deeply enough into your data and the relevant statistics to understand what the basic goal of that t-test or statistic was, and then modify the standard methodology so that it's appropriate to your problem.

My advice to the business people is to get someone who is really freaking smart and who has also demonstrated the ability to work independently and creatively, and who is very good at communicating. And now that I’ve written the above issues down, I realize that another crucial aspect to the job of the data scientist is the ability to create methodology on the spot and argue persuasively that it is kosher.

A useful thing for this last part is to have broad knowledge of the standard methods and to be able to hack together a bit of the relevant part of each; this requires lots of reading of textbooks and research papers. Next, the data scientist has to actually understand it sufficiently to implement it in code. In fact the data scientist should try a bunch of things, to see what is more convincing and what is easier to explain. Finally, the data scientist has to sell it to everyone else.

Come to think of it, the same can be said about being a quant at a hedge fund. Since there's money on the line, you can be sure that management wants you to be able to defend your methodology down to the tiniest detail (yes, I do think that being a quant at a hedge fund is a form of data science job, and this woman agrees with me).

I would argue that an undergrad education probably doesn't give enough perspective to do all of this, even though the basic mathematical tools are there. You need to be comfortable building things from scratch and dealing with people in intense situations. I'm not sure how to train someone for the latter, but for the former a Ph.D. can be a good sign, as can any creative project where the person really made something. They should also be super quantitative, but not necessarily a statistician.

What are the chances that this will work?


One of the positive things about working at D.E. Shaw was the discipline shown in determining whether a model had a good chance of working before spending a bunch of time on it. I've noticed people could often use this kind of discipline, both in their data mining projects and in their normal lives, whether personal or professional.

Some of the relevant modeling questions were asked and quantified:

  1. How much data do you expect to be able to collect? Can you pool across countries? Is there proxy historical data?
  2. How much signal do you estimate could be in that data? (Do you even know what the signal is you’re looking for?)
  3. What is the probability that this will fail? (failing is bad) That it will fail quickly? (failing quickly is good, because it's cheap)
  4. How much time will it take to do the initial phase of the modeling? Subsequent phases?
  5. What is the scope of the model if it works? International? Daily? Monthly?
  6. How much money can you expect from a model like this if it works? (answering this takes knowing how other models perform)
  7. How much risk would a model like this impose?
  8. How similar is this model to other models we already have?
  9. What are the other models that you’re not doing if you do this one, and how do they compare in overall value?

Even if you can’t answer all of these questions, they’re certainly good to ask. Really we should be asking questions like these about lots of projects we take on in our lives, with smallish tweaks:

  1. What are the resources I need to do this? Am I really collecting all the resources I need? What are the resources that I can substitute for them?
  2. How good are my resources? Would better quality resources help this work? Do I even have a well-defined goal?
  3. What is the probability this will fail? That it will fail quickly?
  4. How long will I need to work on this before deciding whether it is working? (Here I’d say write down a date and stick to it. People tend to give themselves too much extra time doing stuff that doesn’t seem to work)
  5. What’s the best case scenario?
  6. How much am I going to learn from this?
  7. How much am I going to grow from doing this?
  8. What are the risks of doing this?
  9. Have I already done this?
  10. What am I not doing if I do this?