Can analysts, experts and gurus really give you an investing/trading edge? Should you track the advice of as many as possible? Are there ways to tell good ones from bad ones? Recent research indicates that the average “expert” has little to offer individual investors/traders. Finding exceptional advisers is no easier than identifying outperforming stocks. Indiscriminately seeking the output of as many experts as possible is a waste of time. Learning what makes a good expert accurate is worthwhile.
Are plans to use nuclear power to provide electricity for proliferating data centers driving attractive performance for uranium exchange-traded funds (ETF)? To investigate, we consider four such ETFs, all currently available:
Global X Uranium ETF (URA) – picks stocks of global companies involved in the uranium industry.
Sprott Uranium Miners ETF (URNM) – picks stocks of firms devoting at least 50% of assets to mining of uranium, holding physical uranium, owning uranium royalties or engaging in other activities that support uranium mining.
Sprott Junior Uranium Miners ETF (URNJ) – picks stocks of small firms devoting at least 50% of assets to mining of uranium, holding physical uranium, owning uranium royalties or engaging in other activities that support uranium mining.
Which is more important, knowing what direction to trade or knowing when to enter a trade? In his February 2026 paper entitled “Who Profits from Prediction Markets? Execution, not Information”, Joshua Della Vedova decomposes prediction market trade returns into:
Directional component – whether the trader predicted the winning outcome.
Execution component – entry price relative to final value.
Prediction markets allow clean measurement of both skills, because outcomes are binary and no benchmark is needed. He considers five types of traders based on wallet activity, size and volume:
Bot (73,935): >50 trades per day or >1,000 total trades.
Sophisticated (64,913): >$10,000 volume, diversified across markets and >30 days of active trading.
Active Retail (1,305,716): 10 to 1,000 trades.
Casual (421,983): 2 to 9 trades.
One-shot (114,861): exactly one trade.
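The decomposition and the wallet classification above can be sketched in Python. This is an illustrative reading of the summary, not the paper's exact method: the 0.5 baseline used to split directional from execution profit, and the "more than one market" proxy for diversification, are assumptions.

```python
def decompose_trade(entry_price: float, outcome: int) -> tuple:
    """Split the per-share return on a YES position into directional and
    execution parts, using a 0.5 ("fair coin") baseline as an assumption.

    entry_price: price paid per YES share (between 0 and 1).
    outcome: 1 if YES resolves true, else 0 (the share's final value).
    """
    directional = outcome - 0.5      # positive only if the trader picked the winner
    execution = 0.5 - entry_price    # how cheaply the trader entered vs. baseline
    return directional, execution    # parts sum to outcome - entry_price

def classify_wallet(n_trades: int, trades_per_day: float,
                    volume_usd: float, n_markets: int,
                    active_days: int) -> str:
    """Bucket a wallet using the thresholds quoted in the summary above."""
    if trades_per_day > 50 or n_trades > 1000:
        return "Bot"
    if volume_usd > 10_000 and n_markets > 1 and active_days > 30:
        return "Sophisticated"
    if n_trades == 1:
        return "One-shot"
    if 2 <= n_trades <= 9:
        return "Casual"
    return "Active Retail"  # remaining wallets: 10 to 1,000 trades
```

A trader who bought YES at 0.40 on a winning outcome earns 0.60 per share, of which 0.50 is directional (picking the winner) and 0.10 is execution (entering below the baseline).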
Using data for 222 million completed trades on Polymarket during November 2022 through part of February 2026, he finds that: Keep Reading
Each year in December, Barron’s publishes its list of the 10 best stocks for the next year. Do these picks on average beat the market? To investigate, we scrape the web to find these lists for years 2011 through 2026, calculate the associated calendar year total return for each stock and then calculate the average return of the 10 stocks for each year. We use SPDR S&P 500 ETF Trust (SPY) as a benchmark for these averages. We source most stock prices from Yahoo!Finance, but also use Historical Stock Price.com for a few stocks no longer tracked there. Using year-end dividend-adjusted stock prices for the specified stock-years during 2010 through 2025, we find that: Keep Reading
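The return calculation described above can be sketched in a few lines, using made-up year-end dividend-adjusted prices (real inputs would come from Yahoo!Finance or Historical Stock Price.com):

```python
def annual_return(prev_close: float, close: float) -> float:
    """Calendar-year total return from dividend-adjusted year-end prices."""
    return close / prev_close - 1.0

def picks_vs_benchmark(pick_prices, spy_prices) -> float:
    """Average return of the picks minus the SPY return for one year.

    pick_prices: list of (prior_year_end, year_end) adjusted prices per pick.
    spy_prices: (prior_year_end, year_end) adjusted SPY prices.
    """
    avg_pick = sum(annual_return(p0, p1) for p0, p1 in pick_prices) / len(pick_prices)
    return avg_pick - annual_return(*spy_prices)

# Fabricated prices for illustration only:
picks = [(100.0, 112.0), (50.0, 47.5), (20.0, 26.0)]
excess = picks_vs_benchmark(picks, (400.0, 440.0))  # picks average minus SPY
```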
Having been trained by humans on human information, do Large Language Models (LLM) behave like human investors? In their January 2026 paper entitled “Artificially Biased Intelligence: Does AI Think Like a Human Investor?”, Javad Keshavarz, Cayman Seagraves and Stace Sirmans investigate whether 48 widely used LLMs exhibit any of 11 known cognitive biases in financial decision-making. They speculate that LLMs acquire biases via human-authored training data, statistical learning and responses that reward perceived helpfulness over logical consistency. Specifically, they test whether:
LLMs exhibit any of the 11 biases.
Any biases vary across LLMs with different levels of intelligence.
Users can intervene to suppress any biases in real-time LLM use.
Their prompt-pair methodology ensures that findings are causal rather than just correlational. Using 25 prompt-pairs for each of the 11 biases across 48 LLMs, they find that: Keep Reading
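As an illustration of the prompt-pair idea (the paper's actual prompts and scoring are unknown), a bias score can be defined as the fraction of logically equivalent prompt pairs that elicit inconsistent answers. The stub responders below stand in for real model calls:

```python
def bias_score(prompt_pairs, ask_llm) -> float:
    """Fraction of logically equivalent prompt pairs answered inconsistently.

    prompt_pairs: list of (framing_a, framing_b) prompts that should elicit
    the same choice from an unbiased decision-maker.
    ask_llm: callable mapping a prompt string to a choice string.
    """
    flips = sum(1 for a, b in prompt_pairs if ask_llm(a) != ask_llm(b))
    return flips / len(prompt_pairs)

# Toy framing-bias pair (gain vs. loss framing of the same gamble):
pairs = [
    ("Option A saves 200 of 600 jobs for sure; Option B saves all 600 with "
     "probability 1/3. Choose A or B.",
     "Option A loses 400 of 600 jobs for sure; Option B loses none with "
     "probability 1/3. Choose A or B."),
]
```

A perfectly consistent responder scores 0; one that flips with the framing scores 1.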
A Securities Information Processor (SIP) aggregates quotes and trades from all U.S. stock exchanges to feed the NYSE Trade and Quote (TAQ) database, used in much finance research to (for example) estimate effective bid-ask spreads and associated trading frictions. Is this database trustworthy? In their December 2025 paper entitled “Latency and the Look-Ahead Bias in Trade and Quote Data”, Robert Battalio, Craig Holden, Matthew Pierson, John Shim and Jun Wu investigate the reliability of TAQ data, focusing on the arrival times of data with different latencies (delays) compared to the reliably ordered NYSE Arca Direct Feed Data. Using timestamped NYSE Daily TAQ data and NYSE Arca Direct Feed Data for the month of June 2019, they find that: Keep Reading
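A minimal illustration of the latency problem (not the paper's method): given events in SIP arrival order along with their exchange timestamps, count adjacent pairs whose true (exchange) order is inverted by differing feed delays.

```python
def inverted_pairs(events) -> int:
    """Count adjacent SIP-ordered events whose exchange timestamps are inverted.

    events: list of (sip_timestamp, exchange_timestamp) tuples, already
    sorted by SIP arrival time.
    """
    return sum(
        1
        for (_, ex_a), (_, ex_b) in zip(events, events[1:])
        if ex_b < ex_a  # later SIP arrival, earlier exchange departure
    )

# Illustrative timestamps in seconds: the third event left the exchange
# before the second but arrived at the SIP after it.
feed = [(1.000, 0.9990), (1.001, 0.9995), (1.002, 0.9992)]
```

Research that treats SIP arrival order as event order can thus "see" a quote that did not yet exist at the exchange, which is the look-ahead bias at issue.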
Most researchers use classical statistical testing, with a t-statistic of 2.0 as the significance threshold for accepting a hypothesis. However, this threshold is valid only if the associated p-value derives from a single test. There are hundreds of published factor tests and an unknown number of unpublished tests. How far should researchers raise the significance threshold to account for multiple hypothesis testing? In their December 2025 paper entitled “What Threshold Should be Applied to Tests of Factor Models?”, Campbell Harvey, Alessio Sancetta and Yuqian Zhao address this issue by:
Clarifying applicable statistical methods, including how to measure the probability that the null hypothesis is true and insight on the False Discovery Rate (FDR), without knowing the number of tests.
Reconciling existing results in the literature.
Providing guidance on the threshold for deciding statistical significance.
They also discuss the plausibility of the assumptions embedded in their approach. Based on mathematical analysis in the context of financial research, they find that: Keep Reading
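As background, the classical Benjamini-Hochberg step-up procedure controls the False Discovery Rate when the number of tests m is known; the paper's contribution is a threshold rule that does not require knowing m. A sketch of the classical procedure:

```python
def benjamini_hochberg(p_values, fdr=0.05):
    """Return indices of hypotheses rejected at the given FDR level.

    Step-up rule: find the largest rank k (1-indexed, p-values sorted
    ascending) with p_(k) <= (k / m) * fdr, and reject the k smallest.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * fdr:
            k_max = rank  # keep the largest passing rank
    return sorted(order[:k_max])
```

With four tests and p-values (0.001, 0.04, 0.03, 0.2) at a 5% FDR, only the first survives, whereas a naive per-test 0.05 cutoff would accept three of the four.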
Prior research suggests that machine learning factor models of the cross section of stock returns greatly enhance portfolio performance by: (1) expanding the dataset to include more variables; and, (2) allowing more complex (non-linear) variable interactions. Does this finding hold up in a realistic portfolio management scenario? In their November 2025 paper entitled “What Drives the Performance of Machine Learning Factor Strategies?”, Mikheil Esakia and Felix Goltz decompose performance contributions from these two enhancements in scenarios ranging from ideal to realistic. The ideal scenario, found in much machine learning research, ignores portfolio management constraints. The realistic scenario excludes microcaps, removes look-ahead bias for yet-to-be-published factors and accounts for trading frictions. They further examine the effect of excluding short positions. They estimate trading frictions as half the monthly effective bid-ask spread (daily average of closing quoted spreads). Using daily and monthly data for publicly listed U.S. common stocks and monthly data for 94 firm-level characteristics as available during June 1963 through December 2021, they find that: Keep Reading
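The friction estimate can be sketched as follows; per the parenthetical above, it assumes the closing quoted spread stands in for the effective spread, with the one-way cost taken as half the proportional spread, averaged over a month:

```python
def half_spread_cost(bid: float, ask: float) -> float:
    """One-way trading cost: half the quoted spread, scaled by the midpoint."""
    mid = (bid + ask) / 2.0
    return (ask - bid) / 2.0 / mid

def monthly_cost(daily_closing_quotes) -> float:
    """Average the daily closing half-spread costs over a month.

    daily_closing_quotes: list of (bid, ask) closing quotes, one per day.
    """
    costs = [half_spread_cost(bid, ask) for bid, ask in daily_closing_quotes]
    return sum(costs) / len(costs)
```

For example, a stock quoted 99.00 / 101.00 at the close carries a one-way cost of 1% of the midpoint for that day.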
How should researchers apply and restrict artificial intelligence (AI) in research? In the December 2025 revision of their editorial entitled “The Use of AI in Academic Research”, Gordon Graham and Jennifer Tucker share experiences as accounting journal editors in dealing with this question. They review the meaning and capabilities of AI. They address the extent to which AI can perform the tasks involved in production of academic research, including pros, cons and unintended consequences. Based on their experiences, they conclude that: Keep Reading