Continuing Conversation... Campbell Harvey on Randomness, Skill, and Investment Strategies
By Amy Willis
In this episode, EconTalk host Russ Roberts sat down with Duke University’s Campbell Harvey to discuss his research on various investment strategies.
Here are some follow-up questions:
1. At approximately the twenty-eight-minute mark, Roberts asks Harvey whether he thinks “black swans” can inform investment decisions. (Brush up on what is meant by “black swans” in this episode with Nassim Taleb.) How does Harvey respond, and to what extent are you convinced by his answer? Would such a strategy correctly take your preferences into account?
2. How does Harvey’s critique of the 2-sigma standard compare to Ed Leamer’s critique of the state of econometrics research? How might research in economics differ from research in finance? Do investment managers have a greater stake in research methods than academics?
3. This week’s episode offered a lot of tidbits for casual investors, not just institutional ones. What did you take away from this week’s conversation that you believe can help in your personal financial decisions? Explain.
READER COMMENTS
Jerm
Mar 27 2015 at 3:37pm
“How does Harvey’s critique of the 2 sigma standard compare to Ed Leamer’s critique of the state of econometrics research? ”
Both critiques highlight the desire of the researcher to find something of importance, sifting through the data for something of significance (in both the popular and the statistical sense). But Dr. Harvey indicates that the financial researchers genuinely do not understand the error of their ways (as in the jellybean comic), while Dr. Leamer seems to suggest that economics researchers aren’t doing enough to enunciate the shades of gray to their customers (especially the press). Do economics researchers really know better (yet paint with a broad brush in order to get their ideas out there)? Maybe. Does it matter, considering that it would lead to the same types of problems pointed out by Dr. Harvey? Maybe not.
“Consider how research in economics might be different from finance? Do investment managers have a greater stake in research methods than academics? ”
I think that academic researchers have a much greater stake in research methods than investment managers, due to the way that academic research is published. An academic researcher creates a paper, which spends a year or so as a “working paper”, normally available for anyone to download and read. They present that paper at conferences and university talks, all of which are populated by half a dozen “research methods” specialists, each waiting for that ONE SLIDE in the PowerPoint presentation that contains all of the math and data results. All of those assaults hammer the paper into something a little more solid. A little tougher. That paper gets sent off to a journal, where it all happens again.
That process sometimes introduces or overlooks errors, but on the whole it does a pretty good job. The working paper turns into a journal article, full of jargon and dense calculus. The glacier of economics research moves an inch. After only two years of work.
For investment managers, it seems like they come up with an idea and then run with it. See where it goes, and hope it works out. They get evaluated based on results, which are not perfectly related to research methods.
This is why Warren Buffett can be seen as a superior animal by one group of people and a statistical anomaly by another.
It was a little refreshing to hear Dr. Harvey indicate that the investment managers actually do care about the validity of their ideas. That they would welcome the type of skewering common to an economics seminar, because it would help them avoid a bad strategy that could RUIN them. Maybe they could borrow the methods wonks from the economics conferences. Those guys love to point out problems with sampling, bias, math, and all that jazz.
[less-than and greater-than symbols changed to quotation marks to improve html–Econlib Ed.]
jw
Mar 27 2015 at 5:48pm
Jerm,
Just like there are good and bad academic research papers, there are good and bad systematic investment strategies.
Some asset managers are extremely careful with their development methodology and do a fine job. Others, not so much. As Prof. Harvey points out, they can go for a while and still look fine before their fatal flaw (and there are many to choose from) hits and they blow up.
Back in 2007, Goldman Sachs ran a $12B fund called Global Alpha. In a month or two it lost 30%. Their PhDs said that it was a “22 sigma event”.
In a normal world, if you made one trade a second, it would take you far longer than the age of the universe to see a “22 sigma event”. Their PhDs obviously did not understand what they were doing, as evidenced both by that statement and by their performance.
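For scale, the one-sided tail probability of a 22-sigma move, under the normality assumption those models relied on, can be checked directly (a back-of-the-envelope sketch; the age-of-the-universe figure is a rough approximation):

```python
import math

# Tail probability of a move of at least 22 standard deviations,
# assuming returns were actually normally distributed.
sigma = 22.0
p = 0.5 * math.erfc(sigma / math.sqrt(2))  # one-sided normal tail

# Expected waiting time at one trade per second, in units of universe ages.
expected_wait_s = 1.0 / p
age_of_universe_s = 4.35e17  # ~13.8 billion years, rough figure

print(f"P(move >= {sigma:.0f} sigma) ~ {p:.2e}")
print(f"expected wait ~ {expected_wait_s / age_of_universe_s:.2e} universe ages")
```

The probability comes out on the order of 10^-107, so the expected wait dwarfs the age of the universe by dozens of orders of magnitude, which is the commenter’s point: either the event was impossibly unlucky, or the normality assumption was wrong.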
However, I doubt GS allowed prospective investors a deep enough look inside of their trading models to do the due diligence required to determine if they knew what they were doing ahead of time.
It is a very difficult problem to evaluate others’ trading systems as you are almost always dealing with closely held trade secrets. The internal risk managers should also be doing this, but risk managers get paid a fraction of what traders get paid…
LKor
Mar 28 2015 at 1:49pm
I’ve only listened to the episode once, but my takeaway was that investors want to avoid negatively skewed strategies, where ‘negatively skewed’ means a strategy that accepts higher tail risk in exchange for more (short-term) upside.
What I still don’t have a great understanding of is: how do you spot negative skew?
jw
Mar 28 2015 at 3:12pm
Managing tail risk is a very complicated subject. He mentions bootstrapping (resampling past data at random), but that in itself can be complicated, as there are several methods. And by definition you will never have seen the black swan that may get you, so you have to add some safety factor to your historical data (making some returns significantly worse than they actually were, for example).
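A minimal sketch of this kind of bootstrap, assuming the simplest i.i.d. variant (the function name and the toy return series are illustrative only; as noted above there are several methods, e.g. block bootstraps that preserve serial correlation):

```python
import random

def bootstrap_outcomes(daily_returns, horizon=252, n_sims=2000, seed=0):
    """Crude i.i.d. bootstrap: resample daily returns with replacement
    to simulate one-year paths, then inspect the bad tail of outcomes."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_sims):
        wealth = 1.0
        for _ in range(horizon):
            wealth *= 1.0 + rng.choice(daily_returns)
        outcomes.append(wealth - 1.0)
    outcomes.sort()
    return {
        "worst": outcomes[0],
        "p05": outcomes[int(0.05 * n_sims)],  # 5th-percentile annual return
        "median": outcomes[n_sims // 2],
    }

# Toy negatively skewed strategy: small gains most days, rare large losses.
daily = [0.002] * 95 + [-0.05] * 5
tail = bootstrap_outcomes(daily)
```

The gap between the median outcome and the 5th-percentile/worst outcomes is one crude way to see the negative skew LKor asks about: the typical year looks fine while the resampled tail does not.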
All of this is interrelated with your risk tolerance. More careful testing may lead to lower leverage and smaller position sizes (a science all to itself), thereby producing lower expected returns. Lower returns mean fewer asset allocations and smaller bonuses (I do not share the Prof’s opinion that reputational risk outweighs bonuses at many firms…).
So one needs to be very careful in the design of the system but still produce acceptable returns. If you haven’t guessed by now, it is very difficult.
Luckily, due to the many different potential strategies and instruments available, a portfolio of well designed trading strategies can reduce a good deal of risk. Of course, finding that many more good strategies is significantly more difficult.
Jerm
Mar 28 2015 at 8:24pm
JW, that’s why academic research and investment strategies are completely different animals.
Good research involves transparent analysis of data, including correct citations to help a reader place the research claims in a proper context. Good research should be replicable and explicitly state the assumptions necessary to reach the conclusions.
Investment strategies are basically art projects. And even if the asset manager uses sound data techniques (and the remarks by Russ and Dr. Harvey indicate they do NOT), they still operate in a secret bubble.
That’s not good research. But they aren’t researchers. They are asset managers who are ultimately evaluated by whether or not they make enough profits.
jw
Mar 29 2015 at 10:29am
It is not quite that cut and dried. In systematic trading, there are choices to be made as to what proportion of in-sample (IS) to out-of-sample (OOS) data to use, which methods of position sizing, which instruments, etc., but there are very quantifiable outcomes and very clear test results, if done properly.
Naturally, physics and engineering research are more quantifiable. However, in economics, psychology (as mentioned in the podcast), and the other softer sciences, many more assumptions are made and probably more statistical tools are available to choose from (each choice leading to a different outcome). I have also read a lot of academic nutrition research, and much of it is worthless.
Please be careful to differentiate between academic research on investment and portfolio theory and systematic trading itself. Much of that academic research also has great problems.
Take Bogle’s famous buy-and-hold index strategy from his 1999 book. The data was skewed by a huge bull market just before publication, which made all of the numbers look much better. Fifteen years later, the SPY index fund has returned a much-worse-than-in-sample 4%/yr, and that is with a Fed-fueled bull market behind it. Just a couple of years ago the result would have been zero. We are near a record-length bull market, and as above I am wagering that we will be a lot lower in the next year or two, so equities may well be back to a zero return since 1999 again.
And of course there were those two 50% drops where one would really, really have to trust the markets…
Fifteen years is getting to be quite a long time horizon for investors who were in their 40s and 50s in 1999.
Lastly, did this rather mindless approach (put all of your 401K into index funds and trust the market) actually help distort the market by pouring money into indexes that could be allocated only one way? This is always a risk. It certainly helped asset gatherers like Vanguard grow, as they didn’t have to claim any skill.
jw
Mar 29 2015 at 11:31am
Addendum: If you had bought the Vanguard Balanced 60% equity/40% bond fund (VBINX) in March 1999, your annualized return would have been 3.25% per year, again MUCH worse than Bogle’s in-sample calculations.
Granted, you would have had only 30% drawdowns vs. 50% in equities alone, but you would have given up some of your already-lagging returns.
jw
Mar 29 2015 at 1:57pm
Add. 2: (Sorry…)
The magic of compound interest now works against Bogle: in order to get to his long-term “buy and hold the equity index” average return of 8%/yr, after 16 years of 4% returns you now need roughly 12%/yr for the next 16 years to get there.
It has happened before, but from these lofty valuations, good luck.
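That catch-up arithmetic can be verified directly: the required rate r for the remaining 16 years solves (1.04)^16 · (1 + r)^16 = (1.08)^32:

```python
# Catch-up return needed so the 32-year average still comes out to 8%/yr
# after the first 16 years delivered only 4%/yr:
#   (1.04**16) * (1 + r)**16 == 1.08**32
target = 1.08 ** 32      # wealth multiple implied by 8%/yr over 32 years
realized = 1.04 ** 16    # wealth multiple actually achieved so far
r = (target / realized) ** (1 / 16) - 1
print(f"required return for the next 16 years: {r:.1%}")  # ~12.2%/yr
```

Note the closed form: r = 1.08²/1.04 − 1 ≈ 12.2%, slightly above the round 12% figure in the comment.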
Fred M
Apr 14 2015 at 8:27pm
Apropos of the conversation, this story of Stan Druckenmiller’s early career might be interesting. His mentor in a bank research department made him boss and told him, “you’re too young, too dumb and too inexperienced to know when not to charge.”
http://images.businessweek.com/bloomberg/pdfs/BN_0850499D_041015_19334.pdf
Comments are closed.