Science as Done by Humans
December 6, 2021 • Posted in Big Ideas
Do the choices researchers make in modeling, sample grooming and programming to test hypotheses materially affect their findings? In their November 2021 paper entitled “Non-Standard Errors”, 164 research teams and 34 peer reviewers representative of the academic empirical finance community investigate this source of uncertainty (non-standard error, as contrasted with purely statistical standard error; a numerical sketch of the distinction follows the list below). Specifically, they explore the following aspects of non-standard errors in financial research:
- How large are they compared to standard errors?
- Does research team quality (prior publications), research design quality (reproducibility) or paper quality (peer evaluation score) explain them?
- Does peer review feedback reduce them?
- Do researchers understand their magnitude?
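To make the distinction concrete, the following minimal sketch (not from the paper; all numbers are hypothetical) contrasts the standard error each team reports for its own estimate with the non-standard error, meaning the dispersion of point estimates across teams testing the same hypothesis on the same data:

```python
import numpy as np

# Hypothetical point estimates (average annual % change in some market-quality
# metric) reported by several research teams testing the same hypothesis on
# the same data; all values are illustrative, not from the paper.
team_estimates = np.array([-1.2, -0.8, -2.5, 0.3, -1.9, -1.1, -0.4, -3.0])

# Standard error: the purely statistical uncertainty each team reports for
# its own estimate (again, illustrative values).
team_standard_errors = np.array([0.5, 0.6, 0.4, 0.7, 0.5, 0.6, 0.5, 0.4])

# Non-standard error: dispersion of point estimates across teams, driven by
# differences in modeling, sample grooming and programming choices.
non_standard_error = team_estimates.std(ddof=1)

print(f"Typical reported standard error: {team_standard_errors.mean():.2f}")
print(f"Non-standard error (cross-team dispersion): {non_standard_error:.2f}")
```

If the cross-team dispersion is comparable to, or larger than, the typical reported standard error, then researcher choices contribute at least as much uncertainty as sampling noise does.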
To conduct the investigation, they pose six hypotheses that involve devising a metric and computing an average annual percentage change to quantify trends in: (1) market efficiency; (2) realized bid-ask spread; (3) share of client volume relative to total volume; (4) realized spread on client orders; (5) share of client orders that are market orders; and (6) gross client trading revenue. The common sample for testing these hypotheses is a set of 720 million EuroStoxx 50 index futures trade records spanning 17 years. Each of the 164 research teams studies each hypothesis and writes a brief paper, and peer reviewers evaluate and provide feedback to research teams on these papers. The authors then quantify the dispersion of findings for each hypothesis and further relate deviations of individual study findings from the average finding to team quality, research design quality and paper quality. Using results for all 984 studies, they find that: (more…)
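As an illustration of the trend metric described above, here is a minimal sketch that assumes the metric is the simple average of year-over-year percentage changes of a yearly series; both the series values and this particular formula choice are hypothetical rather than taken from the paper:

```python
import numpy as np

def average_annual_pct_change(yearly_values):
    """Average of year-over-year percentage changes for a yearly metric series."""
    values = np.asarray(yearly_values, dtype=float)
    yoy_pct_changes = 100.0 * (values[1:] / values[:-1] - 1.0)
    return yoy_pct_changes.mean()

# Hypothetical yearly values of one market-quality metric (say, a realized
# bid-ask spread measure) over a 17-year window; illustrative only.
spread_by_year = [4.1, 3.9, 3.8, 3.5, 3.6, 3.2, 3.0, 2.9, 2.8,
                  2.6, 2.7, 2.4, 2.3, 2.2, 2.1, 2.0, 1.9]

print(f"Average annual change: {average_annual_pct_change(spread_by_year):+.1f}%")
```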