Objective research to aid investing decisions

Investing Expertise

Can analysts, experts and gurus really give you an investing/trading edge? Should you track the advice of as many as possible? Are there ways to tell good ones from bad ones? Recent research indicates that the average “expert” has little to offer individual investors/traders. Finding exceptional advisers is no easier than identifying outperforming stocks. Indiscriminately seeking the output of as many experts as possible is a waste of time. Learning what makes a good expert accurate is worthwhile.

GPT-4 as Financial Advisor

Can state-of-the-art artificial intelligence (AI) applications such as GPT-4, trained on the text of billions of web documents, provide sound financial advice? In their June 2023 paper entitled “Using GPT-4 for Financial Advice”, Christian Fieberg, Lars Hornuf and David Streich test the ability of GPT-4 to provide suitable portfolio allocations for four investor profiles: 30 years old with a 40-year investment horizon, with either high or low risk tolerance; and, 60 years old with a 5-year investment horizon, with either high or low risk tolerance. As benchmarks, they obtain portfolio allocations for identical investor profiles from the robo-advisor of an established U.S.-based financial advisory firm. Recommended portfolios include domestic (U.S.), non-U.S. developed and emerging markets stocks and fixed income, alternative assets (such as real estate and commodities) and cash. To quantify portfolio performance, they calculate average monthly gross return, monthly return volatility and annualized gross Sharpe ratios for all portfolios. Using GPT-4 and robo-advisor recommendations and monthly returns for recommended assets during December 2016 through May 2023 (limited by availability of data for all recommended assets), they find that:

Keep Reading
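The performance metrics in this comparison are standard. As a rough sketch (not the authors' code, with hypothetical monthly returns as input), average monthly return, monthly return volatility and annualized Sharpe ratio derive from a monthly return series as follows:

```python
import numpy as np

def performance_metrics(monthly_returns, annual_rf=0.0):
    """Average monthly return, monthly volatility, annualized Sharpe ratio."""
    r = np.asarray(monthly_returns, dtype=float)
    mean = r.mean()
    vol = r.std(ddof=1)  # sample standard deviation of monthly returns
    monthly_rf = annual_rf / 12.0
    sharpe = (mean - monthly_rf) / vol * np.sqrt(12.0)  # annualize
    return mean, vol, sharpe

# Hypothetical monthly gross returns for a recommended portfolio
mean, vol, sharpe = performance_metrics([0.02, -0.01, 0.015, 0.005])
```

Annualizing multiplies the monthly Sharpe ratio by the square root of 12; since the paper reports gross metrics, no frictions are subtracted here.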

Best Stock Return Horizon for Machine Learning Models?

Researchers applying machine learning to predict stock returns typically train their models on next-month returns, implicitly generating high turnover that negates gross outperformance. Does training such models on longer-term returns (with lower implicit turnovers) work better? In their June 2023 paper entitled “The Term Structure of Machine Learning Alpha”, David Blitz, Matthias Hanauer, Tobias Hoogteijling and Clint Howard explore how a set of linear and non-linear machine learning strategies trained separately at several prediction horizons perform before and after portfolio reformation frictions. Elements of their methodology are:

  • They consider four representative machine learning models encompassing ordinary least squares, elastic net, gradient boosted regression trees and 3-layer deep neural network, plus a simple average ensemble of these four models.
  • Initially, they use the first 18 years of their sample (March 1957 to December 1974) for model training and the next 12 years (January 1975 to December 1986) for validation. Each December, they retrain with the training sample expanded by one year and the validation sample rolled forward one year.
  • Each month, they rank all publicly listed U.S. stocks above the 20th percentile of NYSE market capitalizations (to avoid illiquid small stocks) between −1 and +1 on each of 206 firm/stock characteristics and train the models to predict returns in excess of the U.S. Treasury bill yield, separately at each of four prediction horizons (1, 3, 6 and 12 months).
  • For each prediction horizon each month, they sort stocks into tenths (deciles) from highest to lowest predicted excess return and reform value-weighted decile portfolios. They then compute next-month excess returns for all ten decile portfolios.
  • They consider a naive hedge portfolio for each prediction horizon that is long (short) the top (bottom) decile. To suppress turnover costs, they also consider an efficient portfolio reformation approach that is long (short) stocks currently in the top (bottom) decile, plus stocks selected in previous months still in the top (bottom) 50% of stocks. 

Using the data specified above during March 1957 through December 2021 and assuming constant 0.25% 1-way turnover frictions, they find that:

Keep Reading
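The decile sorting and "efficient" portfolio reformation steps above can be sketched as follows (a simplified illustration, not the authors' code; stock identifiers and predicted returns are hypothetical):

```python
import numpy as np
import pandas as pd

def decile_portfolios(predicted, current_long=None, current_short=None):
    """Form naive and 'efficient' long/short memberships from predicted
    excess returns (a pd.Series indexed by stock identifier)."""
    deciles = pd.qcut(predicted.rank(method="first"), 10, labels=False)
    top = set(predicted.index[deciles == 9])      # naive long side
    bottom = set(predicted.index[deciles == 0])   # naive short side
    median = predicted.median()
    # Efficient reformation: retain prior holdings still in the top (bottom) 50%
    keep_long = {s for s in (current_long or set())
                 if predicted.get(s, -np.inf) >= median}
    keep_short = {s for s in (current_short or set())
                  if predicted.get(s, np.inf) <= median}
    return top | keep_long, bottom | keep_short

# Hypothetical cross-section of predicted excess returns
pred = pd.Series(range(20), index=[f"s{i}" for i in range(20)], dtype=float)
long_side, short_side = decile_portfolios(pred, {"s12"}, {"s3"})
```

Retaining prior holdings that remain in the correct half of the cross-section suppresses turnover relative to strict decile membership.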

When AIs Generate Their Own Training Data

What happens as more and more web-scraped training data for Large Language Models (LLM), such as ChatGPT, derives from outputs of predecessor LLMs? In their May 2023 paper entitled “The Curse of Recursion: Training on Generated Data Makes Models Forget”, Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot and Ross Anderson investigate changes in LLM outputs as training data becomes increasingly LLM-generated. Based on simulations of this potential trend, they find that: Keep Reading

ChatGPT News-based Forecasts of Stock Market Returns

Are the latest forms of artificial intelligence (AI) better at forecasting stock market returns than humans? In his February 2023 preliminary paper entitled “Surveying Generative AI’s Economic Expectations”, Leland Bybee summarizes results of monthly and quarterly forecasts by a large language model (ChatGPT-3.5) of U.S. stock market returns and 13 economic variables based on samples of Wall Street Journal (WSJ) news articles. He uses the S&P 500 Index as a proxy for the U.S. stock market. He asks ChatGPT to provide reasons for responses. He compares accuracy of ChatGPT forecasts to those from: (1) surveys of humans, including the Survey of Professional Forecasters, the American Association of Individual Investors (AAII) and the Duke CFO Survey; and, (2) a wide range of fundamental and economic predictors tested in past research. Using monthly samples of 300 randomly selected WSJ news articles, results of human surveys and various fundamental/economic data during 1984 through 2021, he finds that:

Keep Reading

Vanguard or Fidelity? Active or Passive?

Should investors in low-cost mutual funds consider active ones? In his April 2023 paper entitled “Vanguard and Fidelity Domestic Active Stock Funds: Both Beat their Style Mimicking Vanguard Index Funds, & Vanguard Beats by More”, Edward Tower compares returns of active Vanguard and Fidelity stock mutual funds to those of style-mimicking portfolios of Vanguard index funds. He segments active funds into three groups: U.S. diversified, sector/specialty and global/international. For U.S. diversified funds, for which samples are relatively large, he regresses monthly net returns of each active fund versus monthly net returns of Vanguard index funds to construct an index fund portfolio that duplicates the active fund return pattern (style). For sector/specialty and global/international segments, for which samples are small, he instead compares active fund net returns to those for respective benchmarks. He uses Vanguard Admiral class funds when available, and Investor class funds otherwise. He applies monthly rebalancing for all fund portfolios. Using fund descriptions and monthly net returns during January 2013 through March 2023, he finds that:

Keep Reading
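The style-mimicking construction amounts to regressing an active fund's monthly net returns on candidate index fund returns. A minimal unconstrained OLS sketch (the study may constrain weights; the synthetic return inputs here are hypothetical):

```python
import numpy as np

def style_weights(active_returns, index_returns):
    """Regress active fund monthly returns on index fund monthly returns
    to obtain style-mimicking portfolio weights (unconstrained OLS)."""
    X = np.column_stack([np.ones(len(active_returns)), index_returns])
    beta, *_ = np.linalg.lstsq(X, active_returns, rcond=None)
    alpha, weights = beta[0], beta[1:]  # intercept = return not explained by style
    return alpha, weights

# Hypothetical: active fund = 0.001 + 0.6 * index A + 0.4 * index B
rng = np.random.default_rng(0)
idx = rng.normal(scale=0.03, size=(60, 2))
active = 0.001 + idx @ np.array([0.6, 0.4])
alpha, w = style_weights(active, idx)
```

A positive intercept indicates the active fund beat its style-mimicking index fund portfolio over the sample.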

StockTwits Tweeters as Investing Experts

Are there clearly skilled and unskilled stock-picking influencers on social media platforms such as StockTwits? If so, do investor reactions to such influencers drive out the unskilled ones? In their March 2023 paper entitled “Finfluencers”, Ali Kakhbod, Seyed Kazempour, Dmitry Livdan and Norman Schuerhoff examine skillfulness, influence and survival of StockTwits tweeters who have followers. They apply four skill metrics to measure stock-picking skill levels of these influencers to identify those who are: (1) skilled (reliably good advice); (2) unskilled; and, (3) anti-skilled (reliably bad advice). They calculate future (1 to 20 days) abnormal returns for each influencer by comparing factor model-adjusted returns (alphas) of associated stock picks before and after recommendation dates. To assess skill persistence, they compare influencer skill levels in the first and second halves of the sample. Using tweet-level and follower data from StockTwits for 29,477 influencers, matched daily stock returns and daily equity factor returns during July 13, 2013 through January 1, 2017, they find that:

Keep Reading
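As a simplified stand-in for the paper's four skill metrics, one could classify an influencer by the t-statistic of post-recommendation abnormal returns (the threshold and the inputs below are hypothetical):

```python
import numpy as np

def classify_influencer(post_alphas, t_threshold=2.0):
    """Label an influencer by the t-statistic of abnormal returns
    following that influencer's recommendations."""
    a = np.asarray(post_alphas, dtype=float)
    t = a.mean() / (a.std(ddof=1) / np.sqrt(len(a)))
    if t > t_threshold:
        return "skilled"       # reliably good advice
    if t < -t_threshold:
        return "anti-skilled"  # reliably bad advice
    return "unskilled"

# Hypothetical per-pick abnormal returns for one influencer
label = classify_influencer([0.01, 0.012, 0.009, 0.011, 0.010])
```

Note that anti-skilled influencers are informative in this framework: fading their picks is itself a signal.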

Evaluating Investment Advisory Service Buy Recommendations

A subscriber requested evaluation of Investment Advisory Service stock-picking ability based on a sample newsletter obtained in mid-April 2023. The offerors state that they follow “a sound, buy-and-hold approach to identifying well-managed, high-quality companies” that “highlights emerging and oft-overlooked stocks with excellent growth potential and reasonable valuations.” The sample newsletter, dated January 2022, includes a December 14, 2021 list of 90 stocks apparently representing a recommended portfolio. Of these 90 stocks, 31 have buy recommendations as of that date. To assess the usefulness of the buy recommendations, we calculate total return for each from the close on December 15, 2021 to the close at the end of 2022 and compare the equal-weighted average of those returns to total returns for SPDR S&P 500 ETF Trust (SPY) and (based on the service description) Vanguard U.S. Quality Factor ETF (VFQY) over the same period. For most of the stocks, we use dividend-adjusted price data from Yahoo! Finance. Two of the stocks change name/symbol after the start of the sample period (FB -> META and INS -> CCRD). For one stock with price data no longer available at Yahoo! Finance due to a post-2022 merger (IAA), we use historical data from Barchart.com. Using the specified price data during December 15, 2021 through December 30, 2022, we find that: Keep Reading
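The return comparison reduces to simple arithmetic on dividend-adjusted prices. A minimal sketch (prices below are hypothetical):

```python
def total_return(adjusted_prices):
    """Total return from first to last dividend-adjusted closing price."""
    return adjusted_prices[-1] / adjusted_prices[0] - 1.0

def equal_weighted_return(price_series_list):
    """Equal-weighted average total return across a list of price series."""
    rets = [total_return(p) for p in price_series_list]
    return sum(rets) / len(rets)

# Hypothetical dividend-adjusted price series for two buy recommendations
avg = equal_weighted_return([[100.0, 110.0], [100.0, 90.0]])
```

Using dividend-adjusted prices makes the calculation a total return rather than a price-only return.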

ChatGPT as Stock Sentiment Analyst

Can advanced natural language processing models such as ChatGPT extract sentiment from firm news headlines that usefully predicts associated next-day stock returns? In their April 2023 paper entitled “Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models”, Alejandro Lopez-Lira and Yuehua Tang test the ability of ChatGPT to predict next-day returns of individual stocks via analysis of relevant article and press release headlines from RavenPack, a leading provider of news sentiment data. They pre-process the headlines to ensure unique content and high relevance to a specific firm. They next instruct ChatGPT to designate whether each headline is good, bad or irrelevant for firm stock price, as follows:

“Forget all your previous instructions. Pretend you are a financial expert. You are
a financial expert with stock recommendation experience. Answer “YES” if good
news [+1], “NO” if bad news [-1], or “UNKNOWN” if uncertain in the first line [0]. Then
elaborate with one short and concise sentence on the next line. Is this headline
good or bad for the stock price of _company_name_ in the _term_ term?”

They then compute a ChatGPT score for each stock in the news (averaging if there is more than one headline for a firm) and relate all stock scores to next-day stock returns. They further compare predictive powers of ChatGPT sentiment scores to those provided by RavenPack. Using daily returns for a broad sample of U.S. common stocks and daily news headlines from RavenPack during October 2021 (post-training period for ChatGPT) through December 2022, they find that: Keep Reading
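Mapping ChatGPT answers to numeric scores and averaging per firm can be sketched as follows (a simplified illustration; the tickers and answers are hypothetical):

```python
from collections import defaultdict

# Numeric mapping from the prompt: YES = +1, NO = -1, UNKNOWN = 0
SCORE = {"YES": 1, "NO": -1, "UNKNOWN": 0}

def chatgpt_scores(headline_responses):
    """Average per-firm sentiment score from ChatGPT answers.

    headline_responses: iterable of (ticker, answer) pairs, one per headline.
    """
    per_firm = defaultdict(list)
    for ticker, answer in headline_responses:
        per_firm[ticker].append(SCORE[answer])
    return {t: sum(v) / len(v) for t, v in per_firm.items()}

# Hypothetical headlines scored by ChatGPT on one day
scores = chatgpt_scores([("AAPL", "YES"), ("AAPL", "UNKNOWN"), ("XYZ", "NO")])
```

The resulting daily scores can then be matched against next-day returns to test predictive power.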

Can Expert Financial Advisors Beat the Market?

Can expert financial advisors beat the market? ChatGPT responds: Keep Reading

Survey-based Stock Market Return Forecasts

Can surveys of various expert and inexpert groups usefully predict stock market returns? In their March 2023 paper entitled “How Accurate Are Survey Forecasts on the Market?”, Songrun He, Jiaen Li and Guofu Zhou assess the abilities of three surveys to predict S&P 500 Index returns.

For comparison, they also look at two other predictors, one based on a set of economic variables and the other based on aggregate short interest for U.S. stocks. Their benchmark forecast is a simple random walk tethered to the historical mean return. They test forecast accuracies statistically and gauge the economic value of each forecast based on out-of-sample certainty equivalence gain and Sharpe ratio for a portfolio that times the S&P 500 Index based on the forecast (versus buying and holding the index). Using data for the selected surveys, the set of economic variables, aggregate short interest for U.S. stocks and the S&P 500 Index as available (various start dates) through December 2020, they find that: Keep Reading
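Certainty equivalence gain compares the mean-variance utility of the timing portfolio to that of buying and holding the index. A minimal sketch assuming quadratic utility with risk aversion gamma (the paper's exact parameterization may differ; the return inputs are hypothetical):

```python
import numpy as np

def certainty_equivalent(returns, gamma=3.0):
    """Mean-variance certainty equivalent of a return series."""
    r = np.asarray(returns, dtype=float)
    return r.mean() - 0.5 * gamma * r.var(ddof=1)

def cer_gain(timing_returns, buyhold_returns, gamma=3.0):
    """Certainty equivalence gain of the timing strategy over buy-and-hold."""
    return (certainty_equivalent(timing_returns, gamma)
            - certainty_equivalent(buyhold_returns, gamma))
```

A positive gain means a mean-variance investor with that risk aversion would pay to switch from buy-and-hold to the survey-based timing strategy.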
