

Investing Research Articles


Exploit VIX Percentile Threshold Rule Out-of-Sample?

Is the ability of the VIX percentile threshold rule described in “Using VIX and Investor Sentiment to Explain Stock Market Returns” to explain future stock market excess return in-sample readily exploitable out-of-sample? To investigate, we test a strategy (VIX Percentile Strategy) that each month holds SPDR S&P 500 ETF Trust (SPY) or 3-month U.S. Treasury… Keep Reading
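The rule under test allocates between stocks and cash each month based on where VIX sits relative to its own history. A minimal sketch of that kind of rule, assuming an illustrative 90th-percentile threshold and a made-up VIX history (neither is a parameter from the article):

```python
import numpy as np

def monthly_allocation(vix_history, vix_now, pct_threshold=90):
    """Hold T-bills when the current VIX is at or above the chosen
    historical percentile; otherwise hold SPY."""
    cutoff = np.percentile(vix_history, pct_threshold)
    return "T-bills" if vix_now >= cutoff else "SPY"

# Illustrative history: a VIX spike triggers the defensive leg.
history = [12, 14, 16, 18, 20, 22, 15, 13, 17, 19]
print(monthly_allocation(history, vix_now=35))  # "T-bills"
print(monthly_allocation(history, vix_now=14))  # "SPY"
```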

Using VIX and Investor Sentiment to Explain Stock Market Returns

Do stock market return volatility (as a measure of risk) and aggregate investor sentiment (as a measure of risk tolerance) work well jointly to explain stock market returns? In their June 2023 paper entitled “Time-varying Equity Premia with a High-VIX Threshold and Sentiment”, Naresh Bansal and Chris Stivers investigate the in-sample power of an optimal CBOE… Keep Reading

Weekly Summary of Research Findings: 7/3/23 – 7/7/23

Below is a weekly summary of our research findings for 7/3/23 through 7/7/23. These summaries give you a quick snapshot of our content from the past week so that you can quickly decide what’s relevant to your investing needs. Subscribers: To receive these weekly digests via email, click here to sign up for our mailing list.

Impact of AI on Stock Valuations

How do recent advances in Generative Artificial Intelligence (AI), as epitomized by ChatGPT, impact firm valuations? In their May 2023 paper entitled “Generative AI and Firm Values”, Andrea Eisfeldt, Gregor Schubert and Miao Ben Zhang quantify workforce exposures to AI for publicly traded U.S. companies and translate those exposures into firm valuation effects. Specifically, they:… Keep Reading

Best Stock Return Horizon for Machine Learning Models?

Researchers applying machine learning to predict stock returns typically train their models on next-month returns, implicitly generating high turnover that negates gross outperformance. Does training such models on longer-term returns (with lower implicit turnover) work better? In their June 2023 paper entitled “The Term Structure of Machine Learning Alpha”, David Blitz, Matthias Hanauer, Tobias Hoogteijling… Keep Reading

Why Did SACEVS Allocations Just Change So Much?

Subscribers asked why the Simple Asset Class ETF Value Strategy (SACEVS) signaled an apparently dramatic change in allocations at the end of June. SACEVS seeks a monthly tactical edge from timing three risk premiums associated with U.S. Treasury notes, corporate bonds and stocks: Term – monthly difference between the 10-year Constant Maturity U.S. Treasury note (T-note) yield… Keep Reading

Weekly Summary of Research Findings: 6/26/23 – 6/30/23

Below is a weekly summary of our research findings for 6/26/23 through 6/30/23. These summaries give you a quick snapshot of our content from the past week so that you can quickly decide what’s relevant to your investing needs. Subscribers: To receive these weekly digests via email, click here to sign up for our mailing list.

Performance of non-U.S. 60-40

A subscriber asked about the performance of a strategy that each month rebalances to 60% international equities and 40% international corporate bonds (both non-U.S.), and how this performance compares to that of a portfolio that each month allocates 50% to Simple Asset Class ETF Value Strategy (SACEVS) Best Value and 50% to Simple Asset Class… Keep Reading
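The 60-40 mechanics are simple: each month, reset weights to 60% equities and 40% bonds before applying that month's returns. A minimal sketch, using made-up monthly returns purely for illustration (the actual international equity and corporate bond series are not reproduced here):

```python
def rebalanced_6040(monthly_returns, start=1.0):
    """Grow a portfolio rebalanced to 60% equities / 40% bonds
    at the start of each month.

    monthly_returns: list of (equity_return, bond_return) pairs
    as decimal fractions (0.02 = +2%).
    """
    value = start
    for eq_ret, bond_ret in monthly_returns:
        # Rebalancing each month means the portfolio return is
        # always the fixed-weight blend of the two asset returns.
        value *= 0.60 * (1 + eq_ret) + 0.40 * (1 + bond_ret)
    return value

# Two illustrative months: equities +2% then -1%, bonds +0.5% then +0.3%.
print(rebalanced_6040([(0.02, 0.005), (-0.01, 0.003)]))
```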

Backwards Search for the Most Important Firm/Stock Characteristics

Instead of searching among hundreds of firm/stock characteristics to identify those that best predict stock returns, what about first finding the stocks with the highest and lowest past returns and then examining the characteristics of those stocks? In his June 2023 paper entitled “Essence of the Cross Section”, Sina Seyfi identifies the strongest determinants of… Keep Reading

When AIs Generate Their Own Training Data

What happens as more and more web-scraped training data for Large Language Models (LLM), such as ChatGPT, derives from outputs of predecessor LLMs? In their May 2023 paper entitled “The Curse of Recursion: Training on Generated Data Makes Models Forget”, Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot and Ross Anderson investigate changes… Keep Reading