In his 2007 book Evidence-Based Technical Analysis: Applying the Scientific Method and Statistical Inference to Trading Signals, David Aronson opens with two contentions: (1) “much of the wisdom comprising the popular version of TA does not qualify as legitimate knowledge;” and, (2) “TA must evolve into a rigorous observational science if it is to deliver on its claims and remain relevant.” Taken in parts, this book offers sound methods for analysis. Taken as an integrating whole, it offers insightful context for evaluating a broad range of financial analyses/claims presented by others. Here is a chapter-by-chapter review of some of the insights in this book:
Chapter 1 – Objective Rules and Their Evaluation
Chapter 1 introduces a framework for testing technical trading rules.
Key points from this chapter are:
- Rigorous testing of a rule requires that it be objective: implementable as a computer program (deterministic logic) that generates unambiguous long, short, or neutral positions (a minimal sketch follows this chapter's summary).
- A rule is good only if it beats a reasonable benchmark with a statistically significant margin of victory.
- Detrending the test data set (for example, subtracting the mean daily return from the S&P 500 index's daily returns) provides a consistent benchmark: on detrended data, a rule with no predictive power has an expected return of zero.
- Accurate historical testing requires: (1) avoiding look-ahead bias (“leakage of future information” into the analysis); and, (2) accounting for trading costs.
In summary, a rigorously logical method is essential to transform TA from subjective opinion to objective knowledge.
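To make these requirements concrete, here is a minimal sketch (my own illustration, not code from the book) of a hypothetical moving-average rule expressed as a deterministic program. It outputs +1 (long), -1 (short), or 0 (neutral), scores the rule against detrended returns, lags positions to avoid look-ahead bias, and charges an assumed cost per unit of position change; all names and parameter values are illustrative.

```python
import numpy as np

def ma_rule(prices: np.ndarray, window: int = 50) -> np.ndarray:
    """Hypothetical objective rule: +1 (long) when the closing price is
    above its trailing moving average, -1 (short) when below, and
    0 (neutral) until enough history exists."""
    positions = np.zeros(len(prices))
    for t in range(window, len(prices)):
        moving_avg = prices[t - window:t].mean()
        positions[t] = 1.0 if prices[t] > moving_avg else -1.0
    return positions

def evaluate_rule(prices: np.ndarray, positions: np.ndarray,
                  cost_per_unit_traded: float = 0.0005) -> float:
    """Mean daily return of the rule on detrended data, net of an assumed
    cost per unit of position change, with positions lagged one day to
    avoid look-ahead bias."""
    returns = np.diff(prices) / prices[:-1]      # simple daily returns
    detrended = returns - returns.mean()         # zero-mean benchmark
    lagged_positions = positions[:-1]            # decided today, earned tomorrow
    trades = np.abs(np.diff(positions[:-1], prepend=0.0))
    pnl = lagged_positions * detrended - trades * cost_per_unit_traded
    return float(pnl.mean())

# Usage (with a hypothetical price array `prices`):
#   positions = ma_rule(prices, window=50)
#   mean_excess_return = evaluate_rule(prices, positions)
```

A rule written this way can be re-run on any data set and any parameter setting without human judgment, which is what makes benchmark comparisons meaningful.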
Chapter 2 – The Illusory Validity of Subjective Technical Analysis
Chapter 2 investigates how biases in our thinking processes, especially with respect to complex and uncertain information, undermine the validity of subjective technical analysis.
Key points from this chapter are:
- Our brains are so strongly inclined to find patterns in nature, perhaps as evolutionary compensation for limited processing power, that we often see patterns where none really exist. This tendency toward spurious correlations, evident in subjective chart analysis, is maladapted to modern financial markets.
- Erroneous knowledge (superstition) is resilient due to biases in our thinking processes such as:
- Overconfidence;
- Optimism;
- Confirmation (discounting contradictory data);
- Self-attribution (smart when right and unlucky when wrong); and,
- Hindsight (overstating past successes and understating past failures).
- Good stories well told can make people misweight or ignore facts.
- People are not naturally rigorous logicians and statisticians. A need to simplify complexity and cope with uncertainty makes us prone to seeing and accepting unsound correlations. We tend to overweight vivid examples, recent data and inferences from small samples.
In summary, the scientific method is a reliable path to validity, mitigating the misleading effects of our cognitive biases.
Chapter 3 – The Scientific Method and Technical Analysis
Chapter 3 provides a brief history and description of the scientific method and elaborates on the primary proposition of the book.
Key points from this chapter are:
- Subjective TA practitioners protect themselves from falsification with vague, ambivalent and conditional predictions. “Because they are not testable, subjective methods [of TA] are shielded from empirical challenge. This makes them worse than wrong. They are meaningless propositions devoid of information.”
- Similarly, proponents of the Efficient Markets Hypothesis (EMH) have repeatedly (after the fact) circumvented falsification by defining new risk factors when confronted with unexpected abnormal returns, thereby weakening the credibility of EMH.
- The scientific method requires skepticism, objectification and relentless testing as counterbalance to active speculation about new TA possibilities.
In summary, those who state that TA is more art than science deserve the status of astrologers, alchemists and folk healers.
Chapter 4 – Statistical Analysis
Chapter 4 provides an overview of statistical analysis as related to TA.
Key points from this chapter are:
- TA is essentially statistical inference, the extrapolation of historical data to the future.
- Even the best TA rules generate highly variable performance across data sets (a small illustration follows this chapter's summary).
In summary, TA is predominantly empirical, and statistics is its language.
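A small illustration of that variability (my own sketch, with synthetic data rather than the book's): apply one fixed rule to non-overlapping subperiods of the same return series and compare the per-period mean returns; the spread across subperiods is why single-sample results cannot be taken at face value.

```python
import numpy as np

def subperiod_means(rule_returns: np.ndarray, n_periods: int = 10) -> np.ndarray:
    """Mean daily return of a fixed rule in each of n non-overlapping
    subperiods; the spread across periods shows sampling variability."""
    chunks = np.array_split(rule_returns, n_periods)
    return np.array([chunk.mean() for chunk in chunks])

# Example with synthetic data: even when the true mean is zero, the
# per-period means scatter noticeably around it.
rng = np.random.default_rng(0)
fake_rule_returns = rng.normal(0.0, 0.01, size=2_500)   # roughly 10 years of days
print(subperiod_means(fake_rule_returns, n_periods=10))
```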
Chapter 5 – Hypothesis Tests and Confidence Intervals
Chapter 5 lays out ground rules and methods for testing TA rules.
Key points from this chapter are:
- Confirming evidence is necessary, but not sufficient, to prove the predictive power of a TA rule. Disproving the null hypothesis (that the TA rule does not produce abnormal returns) is a stronger empirical test.
- Because losing money is worse than missing an opportunity, mistakenly accepting a bad TA rule (a false positive) is worse than mistakenly accepting the null hypothesis and passing over a good rule (a false negative). The bar of confidence in new TA rules should therefore be set high.
- Compute-intensive methods (bootstrap and Monte Carlo) amplify the usefulness of historical data sets (a bootstrap sketch follows this chapter's summary).
In summary, attempting to reject the null hypothesis that each TA rule does not work is the most efficient path to building legitimate TA knowledge.
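As a hedged illustration of the compute-intensive approach, the sketch below (not the book's code) bootstraps a rule's mean return on detrended data to estimate a p-value for the null hypothesis that the rule's true mean return is zero or less; function and parameter names are my own.

```python
import numpy as np

def bootstrap_p_value(rule_returns: np.ndarray,
                      n_resamples: int = 10_000,
                      seed: int = 0) -> float:
    """Estimate a p-value for H0: the rule's true mean return is <= 0.

    The sample is re-centered to zero mean (imposing the null), resampled
    with replacement, and the p-value is the fraction of resampled means
    at least as large as the observed mean."""
    rng = np.random.default_rng(seed)
    observed_mean = rule_returns.mean()
    centered = rule_returns - observed_mean      # impose the null hypothesis
    n = len(rule_returns)
    boot_means = np.array([rng.choice(centered, size=n, replace=True).mean()
                           for _ in range(n_resamples)])
    return float((boot_means >= observed_mean).mean())
```

Because a false positive is costlier than a false negative, the rejection threshold applied to such a p-value would reasonably be stricter than the conventional 0.05.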
Chapter 6 – Data Mining Bias: The Fool’s Gold of TA
Chapter 6 explores the value and risk of data mining, the back testing of many TA rules to find the best one.
Key points from this chapter are:
- Data mining may involve:
- Discovering the best value for a parameter within a rule;
- Finding the best rule within a set of rules; and,
- Building ever more complex and effective rules from simpler rules.
- Properly executed data mining can locate the best TA rule, but the back tested performance of this rule overstates future returns. Data mining discovers luck as well as validity and, by definition, a lucky streak for a specific rule is unlikely to persist.
- Poor out-of-sample performance is evidence of this data mining bias, which is a major contributor to erroneous knowledge in objective TA.
- The severity of data mining bias has several dependencies:
- The more rules back tested, the larger the data mining bias.
- The larger the sample size used in back testing, the smaller the data mining bias.
- The greater the similarity of rules back tested (the higher their back test correlations), the smaller the data mining bias.
- The greater the frequency of outliers in the back test sample, the larger the data mining bias.
- The larger the variation in back tested returns among rules considered, the smaller the data mining bias.
- Methods to mitigate the risk of data mining bias are:
- Out-of-sample testing: evaluating a rule picked by data mining on a data set different from the one used to select it (illustrated in the sketch after this chapter's summary);
- Randomization via bootstrap or Monte Carlo methods; and,
- Use of a data mining correction factor for expected returns.
In summary, data mining presents TA experts with both the opportunity to discover the best rule and the risk of overstating its future returns.
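The selection effect behind data mining bias is easy to demonstrate. The sketch below (an illustration, not the book's procedure) generates many rules with no true edge, picks the one with the best in-sample mean return, and compares that figure with the same rule's out-of-sample mean; the selected rule's in-sample performance is systematically optimistic.

```python
import numpy as np

def data_mining_bias_demo(n_rules: int = 500, n_days: int = 1_000, seed: int = 0):
    """Simulate rules with zero true edge and show selection bias.

    Each 'rule' is a random +1/-1 position series applied to random
    detrended returns, so every rule's true expected return is zero.
    Returns (best rule's in-sample mean, same rule's out-of-sample mean)."""
    rng = np.random.default_rng(seed)
    returns = rng.normal(0.0, 0.01, size=2 * n_days)   # synthetic detrended returns
    in_sample, out_sample = returns[:n_days], returns[n_days:]
    positions = rng.choice([-1.0, 1.0], size=(n_rules, 2 * n_days))
    is_means = (positions[:, :n_days] * in_sample).mean(axis=1)
    best = int(np.argmax(is_means))                    # data-mined "best" rule
    oos_mean = (positions[best, n_days:] * out_sample).mean()
    return float(is_means[best]), float(oos_mean)

# Typical runs show a clearly positive in-sample mean for the selected rule
# versus an out-of-sample mean near zero, which is the data mining bias.
print(data_mining_bias_demo())
```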
Chapter 7 – Theories of Nonrandom Price Motion
Chapter 7 surveys the theoretical support for TA from the field of behavioral finance and from the risk premium interpretation of EMH.
Key points from this chapter are:
- Support from a sound theory makes luck less likely as the explanation of success for an outperforming TA rule. Purely empirical TA rules sacrifice this advantage.
- EMH, which precludes or impedes successful TA, is vulnerable to both logical and empirical challenge. Behavioral finance exploits the logical challenges (the limits of human rationality and the limits of arbitrage) to offer alternative (but less sweeping) hypotheses that support successful TA.
- Cognitive biases (see Chapter 2 key points above) are important building blocks for the hypotheses of behavioral finance.
- Competing hypotheses of behavioral finance include:
- Biased processing by investors of public information;
- Biased processing by investors of their private information; and,
- Interaction of news (fundamentals) traders and momentum (trend-following) traders.
- Even efficient markets in which participants have different sensitivities to risk, leading to risk transfer opportunities (risk premiums), offer support to TA methods.
In summary, theoretical support is available to help squeeze randomness out of TA.
Chapter 8 – Case Study of Rule Data Mining for the S&P 500
Chapter 8 shows by example how to account for data mining bias in a test of 6,402 simple technical trading rules, encompassing trend-following, reversals and divergences.
Key points from this chapter are:
- It is important to avoid data snooping bias, an unquantified data mining bias imported from TA rules of other analysts who are vague or silent on the amount of data mining performed in discovering those rules.
- Rules considered include both raw time series and derived indicators for the following inputs (one such construction is sketched after this chapter's summary):
- On-balance volume;
- Accumulation-distribution volume;
- Money flow;
- Negative and positive volume indexes;
- Advance-decline ratio;
- Net volume ratio;
- New highs-lows ratio;
- Prices of debt; and,
- Interest rate spreads.
In summary, this chapter is a how-to synthesis of the book's methods, applied to a concrete universe of rules and data.
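As one hedged example of how a listed input becomes an objective rule, the sketch below computes on-balance volume and converts it into a binary long/short signal via its own trailing moving average; the window length and the long/short mapping are illustrative assumptions, not the case study's actual specification.

```python
import numpy as np

def on_balance_volume(close: np.ndarray, volume: np.ndarray) -> np.ndarray:
    """Cumulative volume, signed by the direction of the daily close change."""
    signs = np.sign(np.diff(close, prepend=close[0]))
    return np.cumsum(signs * volume)

def obv_rule(close: np.ndarray, volume: np.ndarray,
             window: int = 30) -> np.ndarray:
    """Hypothetical rule: long (+1) when OBV is above its trailing moving
    average, short (-1) otherwise; neutral (0) until the average is defined."""
    obv = on_balance_volume(close, volume)
    positions = np.zeros(len(close))
    for t in range(window, len(close)):
        positions[t] = 1.0 if obv[t] > obv[t - window:t].mean() else -1.0
    return positions
```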
Chapter 9 – Case Study Results and the Future of TA
Chapter 9 reports the results of the case study and recommends a future path for TA.
Key points from this chapter are:
- None of the 6,402 rules tested on the S&P 500 index, after adjusting for data mining bias, generate statistically significant outperformance. More complex/nuanced rules, or other financial data sets, might indicate abnormal returns.
- Experiments from the last 50 years expose the inferiority of subjective prediction to even simple statistical methods, whether due to:
- Intrusions of emotion;
- Evolution-driven cognitive biases; or,
- Lack of biological data processing power.
- Like astrology and folk medicine: “Technical analysis will be marginalized to the extent that it does not modernize.”
- Data mining partnerships between TA experts and computers can exploit human inventiveness while avoiding human gullibility.
In summary, the rationalization of financial analysis is still in its infancy.