A recent article from the Journal of Financial Planning, entitled “Post-Modern Portfolio Theory,” argues: “If we periodically scrap our entire world view and play the skeptic, challenging even our most basic assumptions, the result will be progress.”
Perhaps. But challenging basic assumptions is a lot of work, and in many circumstances it is not likely to generate insights of great practical value. So I question whether revisiting fundamentals is always a good use of one’s time.
When it comes to the field of investing, however (and that is the field being discussed in the article), a revisiting of fundamentals is very much needed. Most investing research is wonderful stuff that helps us all make better use of our earnings. But a good deal of it is conducted in exceedingly strange ways.
In most fields of study, research proceeds via a building-block approach. Researchers study a basic question, come up with some theories about it, test those theories, discover what is so, and then move to the next question in the logic chain. I’m certainly no expert on investing research, but what little I know suggests that it is often conducted according to a different set of procedural rules.
With some investing research, the general practice seems to be to develop a plausible-sounding assumption regarding some critical issue, and then to build and build and build on that weak foundation as if it had been proven. Much effort is directed at putting to rest uncertainties over details of the theory, but little at examining whether the root assumption underlying the theory holds up to scrutiny. The end result often seems to be well-footnoted, confidently presented, peer-reviewed nonsense assertions.
I’m exaggerating. But not that much.
It is my study of the safe withdrawal rate (SWR) research that got me interested in these sorts of questions. What I am finding is that studies of safe withdrawal rates are not the only ones that employ la-la land assumptions: assumptions that are not supported by the available evidence and that in fact could not be true, because they run contrary to what reasonable people know just from using their common sense.
Here’s a quote from the article cited above: “The problem is that while we have an elegant mathematical model for describing the perfect investment–called modern portfolio theory (MPT)–that model is wrong. Not wrong in the sense that the overall theory is no good, just wrong in the specific sense that it produces inefficient (and sometimes silly?) portfolios. And we’ve known it for decades.”
That’s how it is with safe withdrawal rates too, of course. The conventional SWR studies are not wrong in the sense that they are no good. There is value in them for those who are aware of the grave flaws in the assumptions they employ. They are wrong in the sense that they do not identify withdrawal rates that are safe. Instead, they identify withdrawal rates that are risky and assert that they are safe. That’s not at all the same thing. It’s silly, yes. It’s not just silly, though. It’s dangerous too.
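To make plain the sort of exercise I am talking about, here is a minimal sketch of a conventional historical-sequence withdrawal test, the kind of calculation that typically sits behind these studies. This is my own illustration, not code from any actual study, and the return history it uses is made up. The point to notice is that the “safe” rate it spits out is entirely a product of the assumption fed into it: that future returns will look like the particular slice of past returns you hand the program.

```python
# Illustrative sketch of a conventional historical-sequence SWR test.
# The return series below is made up; a real study would feed in actual
# historical stock-return and inflation data.

def survives(real_returns, withdrawal_rate, start_balance=100_000):
    """True if a fixed inflation-adjusted withdrawal survives this return sequence."""
    balance = start_balance
    withdrawal = start_balance * withdrawal_rate  # fixed in real terms
    for r in real_returns:
        balance = (balance - withdrawal) * (1 + r)
        if balance <= 0:
            return False
    return True

def highest_surviving_rate(history, horizon=30, step=0.001):
    """Highest withdrawal rate that survives every rolling `horizon`-year window."""
    windows = [history[i:i + horizon] for i in range(len(history) - horizon + 1)]
    rate = 0.0
    while all(survives(w, rate + step) for w in windows):
        rate += step
    return rate

# Hypothetical real-return history, for illustration only.
fake_history = [0.07, -0.10, 0.12, 0.05, -0.02, 0.09, 0.03, -0.15, 0.11, 0.06] * 6

print(f"'Safe' rate under these assumptions: {highest_surviving_rate(fake_history):.1%}")
```

Change the made-up history and the “safe” number changes with it. Nothing in the arithmetic itself tells you whether the assumptions behind it deserve to be trusted.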
It’s dangerous because the investors who read these studies generally have no way of knowing how absurd the underlying assumptions are. I know from conversations I have had with other investors that a good number take the findings of SWR studies more or less on faith. Most investors making use of these studies don’t have the time or skill or inclination to examine the assumptions with great care. They presume that the researchers set things up in sensible ways, and that if there were serious doubts about the validity of the assumptions used, warning language in the studies would point out those grave flaws. That’s the way it works in many other fields in which research is conducted, is it not?
What you get with a good bit of investment research is the stuffy tone you expect to see in serious research combined with comic-book assumptions. The result is cartoon findings set forth in artfully constructed tables. I’ve seen this up close in the area of SWR analysis; the authors of the article cited above are saying that something similar happens in risk-analysis studies generally; and it is my impression that the phenomenon applies to research on a number of other important investing questions too.
Investment research matters. Money advisors rely on the accuracy of the research they reference when advising investors on what to do with their money. But at least some of the people generating that research seem unconcerned by the reality that bad assumptions translate into life setbacks for the investors who use the research to guide decisions about money they need to meet mortgage payments and cover medical bills.
The article cited above quotes Dr. Frank Sortino of the Pension Research Institute as observing: “The business of providing financial advice is driven by marketing and not technology…the incentive [to change] is not there so long as people are making money.” That’s a clue to solving the mystery at issue here. I don’t think it is the entire explanation, however.
My overall take is that, given the importance of investing research, the standards that apply to it are exceedingly strange. Investors have a right to demand better.

