A reader asked: “What are your thoughts on Exhibit 1 (a stunningly accurate 10-year forecast from December 31, 1999) of Jeremy Grantham’s January 2010 Quarterly Report and its implications for Jeremy Grantham’s forecasting? Grantham is rather proud of it, and I am certainly impressed! Has Grantham published other 10-Year forecasts to compare with this one? My sense is that GMO constructed a 10-Year forecast every year, so there should be other forecasts. Do you know anything about the forecasting methodology? GMO appears to use regressions to compute baselines (for price?), compare current actuals to baselines and then forecast the difference to disappear as actuals revert to means.”
The exhibit (see the following table, extracted from Jeremy Grantham’s January 2010 report) compares GMO’s 12/31/99 10-year forecast of annualized real (inflation-adjusted) returns for 11 asset classes to the actual annualized real returns of these asset classes during 2000-2009. The results look impressive. GMO apparently used to generate 10-year forecasts of annualized asset class returns and now generates 7-year forecasts. Given the frequency of Jeremy Grantham’s commentaries, the forecast update frequency is perhaps quarterly.
What happens if we relate forecasted annualized real returns (rather than rankings) to realized values?
The following scatter plot relates actual 10-year annualized real returns for the 11 asset classes to GMO-forecasted values. The Pearson correlation for these two series is 0.94, and the R-squared statistic is 0.88 (the forecast explains 88% of the variation in returns across classes). The GMO forecasts tend to run high, but by less than one percentage point on average.
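The correlation, R-squared, and average forecast bias above can be computed with a few lines of code. The sketch below uses made-up forecast/actual pairs purely for illustration; the numbers are not GMO’s published figures.

```python
import numpy as np

# Hypothetical forecast vs. actual annualized real returns (percent) for
# 11 asset classes -- illustrative only, NOT GMO's published numbers.
forecast = np.array([10.8, 7.8, 6.1, 4.7, 3.9, 3.3, 2.6, 1.6, 0.5, -1.1, -1.9])
actual   = np.array([ 9.9, 7.0, 6.3, 3.8, 3.9, 2.7, 1.8, 1.3, -0.5, -2.7, -3.5])

# Pearson correlation between forecasted and realized returns.
r = np.corrcoef(forecast, actual)[0, 1]

# R-squared: share of cross-class return variation explained by the forecast.
r_squared = r ** 2

# Mean forecast error (positive means forecasts ran high on average).
bias = (forecast - actual).mean()

print(f"r = {r:.2f}, R^2 = {r_squared:.2f}, bias = {bias:+.2f} pct. points")
```

Note that with only 11 observations, a high correlation like this carries wide sampling uncertainty, which is one reason a single featured forecast is weak evidence on its own.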
Some considerations that might make these results seem less than stunning are:
- Long-term asset class returns are generally more predictable than short-term returns due to averaging across different economic/market conditions (sources of shorter-term volatility). Long-run real returns (factoring out inflation-related volatility) may be more predictable than raw returns.
- The benchmark for stunningness should perhaps be average accuracy of comparable forecasts from other professional investment advisors or from simple algorithms, rather than the performance of random (completely uninformed) asset ranking.
- The result is sampled with hindsight for promotional purposes. Information to measure the accuracy of GMO’s long-term forecasts on average is not available on GMO’s web site. Is the featured forecast accuracy an outlier, or reasonably representative of the distribution of forecast accuracies?
Your idea about how GMO constructs long-term forecasts (reversion to long-term trends) is likely on the right track. Regressions to define long-term trends are probably run on returns/yields (relatively stationary, mean-reverting series) rather than on prices.
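A minimal sketch of that kind of mean-reversion logic, using a hypothetical yield series and an assumed linear reversion path (this is a guess at the general approach, not GMO’s actual methodology): estimate a long-run baseline for a yield, then assume the current yield closes the gap to that baseline over the forecast horizon.

```python
import numpy as np

# Hypothetical annual earnings-yield history (percent) -- not real data.
yields = np.array([6.0, 5.5, 5.8, 6.3, 5.9, 5.4, 5.7, 6.1, 5.6, 4.0])

long_run_mean = yields[:-1].mean()  # baseline estimated from prior history
current = yields[-1]                # latest yield, well below its baseline

horizon = 10  # years over which the gap is assumed to close

# If the yield reverts linearly to its long-run mean over the horizon,
# the implied future yield path and its average:
path = np.linspace(current, long_run_mean, horizon + 1)[1:]
implied_avg_yield = path.mean()

print(f"long-run mean = {long_run_mean:.2f}%, current = {current:.2f}%")
print(f"implied average yield over {horizon} years = {implied_avg_yield:.2f}%")
```

An actual implementation would presumably also translate the yield path into a return forecast (income plus the valuation change implied by the reversion), but the core step is the same: forecast the gap between the current level and a regression-based baseline to disappear.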
In the absence of shorter-term forecasts for return means and volatilities of the 11 asset classes, building an investment strategy around these long-term forecasts seems problematic. A strategy of allocating to asset classes with the highest forecasted 10-year real returns and rebalancing every decade would require many decades of data for reliable testing. Some (most?) of the 11 classes were very volatile over the decade. More frequent rebalancing to exploit the volatility would require shorter-term forecasts.
Note that: “Evergreen Asset Allocation Fund invests all of its assets in Asset Allocation Trust, which is advised by GMO.” This fund currently has an “Average” rating from Morningstar. There may be better funds to gauge the exploitability of GMO’s research.