Lecture 09: Power & effect sizes
Resource type: this resource contains a lecture or lecture notes.
This is the ninth lecture for the Survey research and design in psychology unit of study.
This page is complete for 2018.
Outline
Explains the use of, and issues involved in:
- Statistical significance testing
- Statistical power
- Effect sizes
- Confidence intervals
- Meta-analysis
Conclusions
- Decide on H0 and H1 (one- or two-tailed)
- Calculate power beforehand and adjust the design to detect a minimum effect size (ES); see the sketch after this list
- Report statistical power, statistical significance, ES, and confidence intervals
- Compare results with meta-analyses and/or meaningful benchmarks
- Take a balanced, critical approach, striving for objectivity and academic integrity
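The sketch below illustrates how the first three of these recommendations might look in practice for a simple two-group design, assuming Python with scipy and statsmodels available; the target values (d = 0.5, alpha = .05, power = .80) and the simulated data are illustrative only, not values from the lecture.

```python
# A priori power analysis plus basic effect-size and CI reporting for a
# two-group comparison. All numbers below (d = 0.5, alpha = .05,
# power = .80, and the simulated data) are illustrative only.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# 1. Before data collection: how many participants per group are needed to
#    detect a minimum effect size of d = 0.5 with 80% power (two-tailed)?
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.80, alternative='two-sided')
print(f"Required n per group: {np.ceil(n_per_group):.0f}")   # ~64

# 2. After data collection: report significance, effect size, and a CI.
rng = np.random.default_rng(1)
group1 = rng.normal(10.5, 2.0, 64)   # simulated scores, group 1
group2 = rng.normal(9.5, 2.0, 64)    # simulated scores, group 2

t, p = stats.ttest_ind(group1, group2)
pooled_sd = np.sqrt((group1.var(ddof=1) + group2.var(ddof=1)) / 2)
d = (group1.mean() - group2.mean()) / pooled_sd          # Cohen's d

diff = group1.mean() - group2.mean()
se = pooled_sd * np.sqrt(1 / len(group1) + 1 / len(group2))
ci = stats.t.interval(0.95, df=len(group1) + len(group2) - 2,
                      loc=diff, scale=se)

print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}, "
      f"95% CI for the difference = [{ci[0]:.2f}, {ci[1]:.2f}]")
```

The resulting effect size and confidence interval can then be compared against meta-analytic estimates or other meaningful benchmarks, as the conclusions above recommend.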
Readings
- Howitt and Cramer (2014a):
- Chapter 35: The size of effects in statistical analysis: Do my findings matter? (pp. 487-494)
- Chapter 36: Meta-analysis: Combining and exploring statistical findings from previous research (pp. 495-514)
- Chapter 38: Confidence intervals (pp. 529-539)
- Chapter 40: Statistical power: Getting the sample size right (pp. 562-586)
- Wilkinson, L., & APA Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594-604.
Handout
- Lecture slides (Google Slides, 2018)
See also
- Multiple linear regression II (Previous lecture)
- Summary & conclusion (Next lecture)
- Data dredging (Wikipedia)
- Funnel plot (Wikipedia)
External links
- p values and statistical significance (A New View of Statistics)
- Statisticians issue warning over misuse of P values: Policy statement aims to halt missteps in the quest for certainty (2016)
- The ASA's statement on p-values: context, process, and purpose (2016)
- Power calculators:
  - Post-hoc statistical power calculator for multiple regression (danielsoper.com); see the sketch below
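As a rough illustration of the kind of calculation the linked post-hoc calculator performs, the sketch below (assuming SciPy; the R², predictor count, and sample size are made-up inputs) estimates observed power for a multiple regression R² from the noncentral F distribution:

```python
# Post-hoc (observed) statistical power for a multiple regression R-squared,
# via the noncentral F distribution and Cohen's f-squared convention.
# The inputs (R2, k, n, alpha) are illustrative values only.
from scipy import stats

R2, k, n, alpha = 0.13, 3, 120, 0.05    # observed R^2, predictors, sample size

f2 = R2 / (1 - R2)                      # Cohen's f-squared effect size
df1, df2 = k, n - k - 1                 # numerator / denominator df
ncp = f2 * (df1 + df2 + 1)              # noncentrality parameter (Cohen, 1988)

f_crit = stats.f.ppf(1 - alpha, df1, df2)
power = 1 - stats.ncf.cdf(f_crit, df1, df2, ncp)

print(f"f^2 = {f2:.3f}, observed power = {power:.3f}")
```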