Big Ideas

These blog entries offer some big ideas of lasting value relevant for investing and trading.

Retirement Income Modeling Risks

How much can the (in)accuracy of retirement portfolio modeling assumptions affect conclusions about the safety of retirement income? In their December 2014 paper entitled “How Risky is Your Retirement Income Risk Model?”, Patrick Collins, Huy Lam and Josh Stampfli examine potential weaknesses in the following retirement income modeling approaches:

  • Theoretically grounded formulas – often complex with rigid assumptions.
  • Historical backtesting – assumes the future will be like the past and requires long samples.
  • Bootstrapping (reshuffled historical returns) – provides alternate histories but does not preserve return time series characteristics (such as serial correlation), and requires long samples (see the sketch after this list).
  • Monte Carlo simulation with normal return distributions – sensitive to changes in assumed return statistics and often does not preserve empirical return time series characteristics.
  • Monte Carlo simulation with non-normal return distributions – complex and often does not preserve empirical return time series characteristics.
  • Vector autoregression – better reflects empirical time series characteristics and can incorporate predictive variables, but requires estimation of regression coefficients and is difficult to implement.
  • Regime-switching simulation (multiple interleaved return distributions representing different market states) – complex, requiring estimation of many parameters, and typically involves small samples in terms of number of regimes.
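
For a concrete sense of how the bootstrapping and Monte Carlo approaches differ, the following Python sketch estimates probability of shortfall for a fixed real withdrawal plan two ways: by resampling a hypothetical historical return series and by drawing from a normal distribution matched to that series. The return history, 4% withdrawal rate and 30-year horizon are illustrative assumptions, not inputs from the paper.

  import numpy as np

  rng = np.random.default_rng(1)

  # Illustrative assumptions (not from the paper): a stand-in "historical"
  # record of annual real returns, a 30-year horizon, 4% of initial wealth
  # withdrawn each year.
  historical = rng.normal(0.05, 0.18, 60)
  years, withdrawal, n_trials = 30, 0.04, 10_000

  def shortfall_prob(draw_returns):
      """Fraction of simulated retirements that exhaust the portfolio."""
      failures = 0
      for _ in range(n_trials):
          wealth = 1.0
          for r in draw_returns(years):
              wealth = (wealth - withdrawal) * (1 + r)
              if wealth <= 0:
                  failures += 1
                  break
      return failures / n_trials

  # Bootstrapping: reshuffle the historical record (loses serial correlation).
  boot = shortfall_prob(lambda n: rng.choice(historical, n, replace=True))
  # Monte Carlo: normal distribution with the same mean and standard deviation.
  mc = shortfall_prob(lambda n: rng.normal(historical.mean(), historical.std(), n))
  print(f"shortfall probability - bootstrap: {boot:.1%}, normal Monte Carlo: {mc:.1%}")

Because both methods here draw each year independently, neither preserves serial correlation; capturing that is what motivates the vector autoregression and regime-switching approaches listed above.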

They focus on retirement withdrawal sustainability (probability of shortfall) as a risk metric and on risks associated with assumptions about future asset returns, inflation and longevity. They employ a series of examples to demonstrate how an overly simple model may distort retirement income risk. Based on this analysis and these examples, they conclude that: Keep Reading

A Few Notes on A Random Walk Down Wall Street

In the preface to the eleventh (2015) edition of his book entitled A Random Walk Down Wall Street: The Time-Tested Strategy for Successful Investing, author Burton Malkiel states: “The message of the original edition was a very simple one: Investors would be far better off buying and holding an index fund than attempting to buy and sell individual securities or actively managed mutual funds. …Now, over forty years later, I believe even more strongly in that original thesis… Why, then, an eleventh edition of this book? …The answer is that there have been enormous changes in the financial instruments available to the public… In addition, investors can benefit from a critical analysis of the wealth of new information provided by academic researchers and market professionals… There have been so many bewildering claims about the stock market that it’s important to have a book that sets the record straight.” Based on a survey of financial markets research and his own analyses, he concludes that: Keep Reading

Crash Protection Strategies

How can investors protect portfolios from crashes across asset classes? In the November 2014 version of his paper entitled “Tail Risk Protection in Asset Management”, Cristian Homescu describes tail (crash) risk metrics and summarizes the body of recent research on the effectiveness and costs of alternative tail risk protection strategies. The purpose of these strategies is to mitigate or eliminate investment losses during rare events adverse to portfolio holdings. These strategies typically bear material costs. He focuses on some strategies that may be profitable and hence useful for more than crash protection. Based on recent tail risk management research and some examples, he concludes that: Keep Reading
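
For readers unfamiliar with tail risk metrics, a minimal Python sketch of two common ones, historical Value at Risk (VaR) and Conditional Value at Risk (CVaR, also called expected shortfall), follows. The 95% confidence level and the fat-tailed simulated returns are illustrative assumptions, not values from the paper.

  import numpy as np

  rng = np.random.default_rng(7)
  # Illustrative fat-tailed daily returns (Student's t with 4 degrees of freedom).
  returns = rng.standard_t(df=4, size=2500) * 0.01

  level = 0.95
  var = -np.quantile(returns, 1 - level)       # loss exceeded on the worst 5% of days
  cvar = -returns[returns <= -var].mean()      # average loss on those worst days
  print(f"95% VaR: {var:.2%}  95% CVaR: {cvar:.2%}")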

Overview of Equity Factor Investing

Is equity factor investing a straightforward path to premium capture and diversification? In their October 2014 paper entitled “Facts and Fantasies About Factor Investing”, Zelia Cazalet and Thierry Roncalli summarize the body of research on factor investing and provide examples to address the following questions:

  1. What is a risk factor?
  2. Do all risk factors offer attractive premiums?
  3. How stable and robust are these premiums?
  4. How can investors translate academic risk factors into portfolios?
  5. How should investors allocate to different factors?

They define risk factor investing as the attempt to enhance returns in the long run by capturing systematic risk premiums. They focus on the gap between retrospective (academic) analysis and prospective portfolio implementation. They summarize research on the following factors: market beta, size, book-to-market ratio, momentum, volatility, liquidity, carry, quality, yield curve slope, default risk, coskewness and macroeconomic variables. Based on the body of factor investing research and examples, they conclude that: Keep Reading
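
On question 4 (translating academic factors into portfolios), the standard academic construction is a long-short sort on the factor characteristic. The Python sketch below illustrates it with a made-up score column standing in for any of the factors listed above; the decile breakpoints and equal weighting are simplifying assumptions.

  import numpy as np
  import pandas as pd

  rng = np.random.default_rng(0)
  # Hypothetical cross-section: a factor score and next-month return per stock.
  df = pd.DataFrame({"score": rng.normal(size=1000),
                     "next_ret": rng.normal(0.01, 0.08, 1000)})

  # Long the top decile by factor score, short the bottom decile, equal weights.
  decile = pd.qcut(df["score"], 10, labels=False)
  premium = (df.loc[decile == 9, "next_ret"].mean()
             - df.loc[decile == 0, "next_ret"].mean())
  print(f"one-month long-short factor return: {premium:.2%}")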

Static Smart Beta vs. Many Dynamic Proprietary Factors

Which is the better equity investment strategy: (1) a consistent portfolio tilt toward one or a few factors widely accepted, based on linear regression backtests, as effective in selecting stocks with above-average performance (smart beta); or, (2) a more complex strategy that seeks to identify stocks with above-average performance via potentially dynamic relationships with a set of many proprietary factors? In their September 2014 paper entitled “Investing in a Multidimensional Market”, Bruce Jacobs and Kenneth Levy argue for the latter. Referring to recent research finding that many factors are highly significant stock return predictors in multivariate regression tests, they conclude that: Keep Reading
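
The multivariate regression tests cited are cross-sectional: each period, realized stock returns are regressed jointly on many factor exposures, so each coefficient estimates a "pure" factor return with the other factors held constant. A minimal sketch of one such regression, with simulated data standing in for the authors' proprietary factor set:

  import numpy as np

  rng = np.random.default_rng(3)
  n_stocks, n_factors = 2000, 25    # illustrative sizes, not the authors' set

  X = rng.normal(size=(n_stocks, n_factors))            # factor exposures
  true_f = rng.normal(0, 0.002, n_factors)              # "true" factor returns
  r = X @ true_f + rng.normal(0, 0.05, n_stocks)        # realized returns

  # Multivariate OLS: each coefficient is a pure factor return this period.
  X1 = np.column_stack([np.ones(n_stocks), X])
  coefs, *_ = np.linalg.lstsq(X1, r, rcond=None)
  print("estimated pure factor returns (first three):", np.round(coefs[1:4], 4))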

Taming the Factor Zoo?

How should researchers address the issue of aggregate/cumulative data snooping bias, which derives from many researchers exploring approximately the same data over time? In the October 2014 version of their paper entitled “. . . and the Cross-Section of Expected Returns”, Campbell Harvey, Yan Liu and Heqing Zhu examine this issue with respect to studies that discover factors explaining differences in future returns among U.S. stocks. They argue that aggregate/cumulative data snooping bias makes conventional statistical significance cutoffs (for example, a t-statistic of at least 2.0) too low. Researchers should view their respective analyses not as independent single tests, but rather as one of many within a multiple hypothesis testing framework. Such a framework raises the bar for significance according to the number of hypotheses tested, and the authors give guidance on how high the bar should be. They acknowledge that they considered only top journals and relatively few working papers in tallying discovered factors, and do not (cannot) count past tests of factors that fell short of conventional significance levels (and consequently went unpublished). Using a body of 313 published papers and 63 working papers encompassing 316 factors explaining the cross-section of future U.S. stock returns from the mid-1960s through 2012, they find that: Keep Reading
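
To see why multiple testing raises the bar, consider the Bonferroni adjustment, the bluntest of the corrections the authors discuss: with m hypotheses tested, an individual test must achieve p ≤ α/m rather than p ≤ α. The Python sketch below converts that cutoff into an equivalent t-statistic hurdle; the factor count matches the paper's 316, but treating all tests as independent is our simplifying assumption.

  from scipy.stats import norm

  alpha, m = 0.05, 316                            # 316 factors, as in the paper
  p_bonferroni = alpha / m                        # adjusted per-test cutoff

  # Two-sided t-statistic hurdles (normal approximation for large samples).
  t_single = norm.ppf(1 - alpha / 2)              # ~1.96, the conventional bar
  t_bonferroni = norm.ppf(1 - p_bonferroni / 2)   # ~3.8, nearly twice as high
  print(f"t-hurdle: single test {t_single:.2f}, Bonferroni {t_bonferroni:.2f}")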

Improving Established Multi-factor Stock-picking Models Is Hard

Are more factors clearly better in a stock screening strategy? In the October 2014 draft of their paper entitled “Incremental Variables and the Investment Opportunity Set”, Eugene Fama and Kenneth French investigate the effects of adding to an established multi-factor model of stock returns an additional factor that by itself has power to predict stock returns. They focus on size, book-to-market ratio (B/M, measured with lagged book value) and momentum (cumulative return from 12 months ago to one month ago, with a skip-month to avoid systematic reversal). They consider a broad sample of U.S. stocks and three subsamples: microcaps (below the 20th percentile of NYSE market capitalizations); small stocks (20th to 50th percentiles); and, big stocks (above the 50th percentile). They perform factor-return regressions, and they translate regression results into portfolio returns by: (1) ranking stocks into fifths (quintiles) based on full-sample average regression-predicted returns; and, (2) measuring gross average returns from hedge portfolios that are long (short) the equally weighted quintile with the highest (lowest) expected returns. Finally, they perform statistical tests to determine whether the maximum Sharpe ratio for quintile portfolios constructed from three-factor regressions is reliably higher than those for two-factor regressions. Using monthly excess returns (relative to the one-month Treasury bill yield) for a broad sample of U.S. stocks during January 1927 through December 2013, they find that: Keep Reading
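
A minimal Python sketch of the regression-to-portfolio translation just described, with simulated data standing in for regression-predicted returns: rank stocks into quintiles on predicted return, then go long the top quintile and short the bottom one, equally weighted.

  import numpy as np
  import pandas as pd

  rng = np.random.default_rng(5)
  n = 3000    # illustrative stock count

  # Stand-ins for regression-predicted and realized monthly returns.
  df = pd.DataFrame({"predicted": rng.normal(0.01, 0.02, n)})
  df["realized"] = 0.3 * df["predicted"] + rng.normal(0, 0.08, n)

  # Quintiles on predicted return; hedge = long top fifth, short bottom fifth.
  df["quintile"] = pd.qcut(df["predicted"], 5, labels=False)
  hedge = (df.loc[df["quintile"] == 4, "realized"].mean()
           - df.loc[df["quintile"] == 0, "realized"].mean())
  print(f"gross monthly hedge portfolio return: {hedge:.2%}")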

Better Four-factor Model of Stock Returns?

Are the widely used Fama-French three-factor model (market, size, book-to-market ratio) and the Carhart four-factor model (adding momentum) the best factor models of stock returns? In their September 2014 paper entitled “Digesting Anomalies: An Investment Approach”, Kewei Hou, Chen Xue and Lu Zhang construct the q-factor model comprised of market, size, investment and profitability factors and test its ability to predict stock returns. They also test its ability to account for 80 stock return anomalies (16 momentum-related, 12 value-related, 14 investment-related, 14 profitability-related, 11 related to intangibles and 13 related to trading frictions). Specifically, the q-factor model describes the excess return (relative to the risk-free rate) of a stock via its dependence on:

  1. The market excess return.
  2. The difference in returns between small and big stocks.
  3. The difference in returns between stocks with low and high investment-to-assets ratios (change in total assets divided by lagged total assets).
  4. The difference in returns between high-return on equity (ROE) stocks and low-ROE stocks.
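
In equation form, the q-factor model says the expected excess return of stock i is:

  E[R_i] − R_f = β_MKT,i E[MKT] + β_ME,i E[r_ME] + β_I/A,i E[r_I/A] + β_ROE,i E[r_ROE]

where MKT, r_ME, r_I/A and r_ROE are the four factor returns listed above and the betas are the stock's loadings on them from time series regression.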

They estimate the q-factors from a triple 2-by-3-by-3 sort on size, investment-to-assets and ROE. They compare the predictive power of this model with those of the Fama-French and Carhart models. Using returns, market capitalizations and firm accounting data for a broad sample of U.S. stocks during January 1972 through December 2012, they find that: Keep Reading
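
A rough Python sketch of that factor construction, assuming one month of stock data with market capitalization, investment-to-assets, ROE and return columns (column names are ours; the paper uses NYSE breakpoints, simplified here to a median size split and full-sample terciles):

  import numpy as np
  import pandas as pd

  def q_factors(df):
      """2x3x3 sort into 18 size/investment/ROE portfolios, then factor spreads."""
      df = df.copy()
      df["size_grp"] = (df["mktcap"] > df["mktcap"].median()).astype(int)  # 0 small, 1 big
      df["inv_grp"] = pd.qcut(df["inv_to_assets"], 3, labels=False)        # investment terciles
      df["roe_grp"] = pd.qcut(df["roe"], 3, labels=False)                  # ROE terciles

      # Value-weighted return of each of the 18 (2x3x3) portfolios.
      port = df.groupby(["size_grp", "inv_grp", "roe_grp"]).apply(
          lambda g: np.average(g["ret"], weights=g["mktcap"]))

      r_me = port.xs(0, level="size_grp").mean() - port.xs(1, level="size_grp").mean()
      r_ia = port.xs(0, level="inv_grp").mean() - port.xs(2, level="inv_grp").mean()
      r_roe = port.xs(2, level="roe_grp").mean() - port.xs(0, level="roe_grp").mean()
      return r_me, r_ia, r_roe   # small-minus-big, low-minus-high I/A, high-minus-low ROE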

Forget CAPM Beta?

Does the Capital Asset Pricing Model (CAPM) make predictions useful to investors? In his October 2014 paper entitled “CAPM: an Absurd Model”, Pablo Fernandez argues that the assumptions and predictions of CAPM have no basis in the real world. A key implication of CAPM for investors is that an asset’s expected return relates positively to its expected beta (regression coefficient relative to the expected market risk premium). Based on a survey of related research, he concludes that: Keep Reading
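
For reference, the CAPM prediction under attack is:

  E[R_i] = R_f + β_i (E[R_m] − R_f)

where β_i is stock i's regression coefficient against the market and E[R_m] − R_f is the expected market risk premium.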

Snooping for Fun and No Profit

How much distortion can data snooping inject into expected investment strategy performance? In their October 2014 paper entitled “Statistical Overfitting and Backtest Performance”, David Bailey, Stephanie Ger, Marcos Lopez de Prado, Alexander Sim and Kesheng Wu note that powerful computers let researchers test an extremely large number of model variations on a given set of data, thereby inducing extreme overfitting. In finance, this snooping often takes the form of refining a trading strategy to optimize its performance within a set of historical market data. The authors introduce a way to explore snooping effects via an online simulator that finds the optimal (maximum Sharpe ratio) variant of a simple trading strategy by testing all possible integer values for strategy parameters as applied to a set of randomly generated daily “returns.” The simple trading strategy each month trades a single asset by (1) choosing a day of the month to enter either a long or a short position and (2) exiting after a specified number of days or a stop-loss condition. The randomly generated “returns” come from a source Gaussian (normal) distribution with zero mean. The simulator allows a user to specify a maximum holding period, a maximum percentage stop loss, sample length (number of days), sample volatility (number of standard deviations) and sample starting point (random number generator seed). After identifying optimal parameter values on “backtest” data, the simulator runs the optimal strategy variant on a second set of randomly generated returns to show the effect of backtest overfitting. Using this simulator, they conclude that: Keep Reading
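
The Python sketch below reproduces the spirit of the simulator (parameter ranges and the omission of the stop-loss condition are our simplifications): exhaustively optimize entry day, trade direction and holding period on one random return series, then apply the "optimal" variant to a fresh, independent series.

  import numpy as np

  rng = np.random.default_rng(42)

  def sharpe(pnl):
      s = pnl.std()
      return pnl.mean() / s * np.sqrt(12) if s > 0 else 0.0   # annualized from monthly

  def strategy_pnl(days, entry_day, side, hold):
      """Each 21-day 'month': enter long (+1) or short (-1) on entry_day, hold `hold` days."""
      months = days.reshape(-1, 21)
      return side * months[:, entry_day:entry_day + hold].sum(axis=1)

  def best_variant(days):
      grid = [(e, s, h) for e in range(10) for s in (1, -1) for h in range(1, 11)]
      return max(grid, key=lambda p: sharpe(strategy_pnl(days, *p)))

  backtest = rng.normal(0, 0.01, 5 * 12 * 21)   # 5 years of random daily "returns"
  fresh = rng.normal(0, 0.01, 5 * 12 * 21)      # independent second sample

  params = best_variant(backtest)
  print("in-sample Sharpe:     ", round(sharpe(strategy_pnl(backtest, *params)), 2))
  print("out-of-sample Sharpe: ", round(sharpe(strategy_pnl(fresh, *params)), 2))

The in-sample Sharpe ratio looks impressive because 200 variants were tried on the same noise; the out-of-sample Sharpe ratio hovers near zero, since the underlying returns carry no signal at all.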
