TICKETED SESSIONS | Advanced Methodology and Statistics Seminars

54th Annual Convention 2020
AMASS

AMASS 1: Thursday, November 19 | 8:30 a.m. - 12:30 p.m.

Encore AMASS: back by popular demand from 2019

Open Science Practices for Clinical Researchers: What You Need to Know and How to Get Started

Jessica Schleider, Ph.D., Stony Brook University

Michael Mullarkey, M.A., University of Texas at Austin

Participants earn 4 continuing education credits

Basic level of familiarity with the material

Primary Topic: Research Methods and Statistics

Keywords: Research Methods, Statistics, Professional Development

Clinical psychology is undergoing a revolution where hypotheses, data, materials, and papers are shared more openly than ever before, improving the credibility, accessibility, and transparency of the science we produce. Additionally, an increasing list of top-tier outlets for clinical trials now require (e.g., Journal of Consulting and Clinical Psychology, Archives of General Psychiatry/JAMA Psychiatry) or strongly encourage (e.g., Clinical Psychological Science) primary hypotheses to be preregistered in order to be considered for publication.

Secondary analyses are also being subjected to ever-increasing scrutiny, with the credibility of research findings becoming an integral part of the review process. However, clinical psychology has lagged behind other areas in adopting credibility-enhancing research practices. This may be at least partially because adopting such practices is often framed as a communal good but a personal sacrifice of time and effort. The landscape is evolving such that open science practices are no longer optional, and policies at leading clinical journals suggest that this trend will only accelerate over the near term (e.g., Davila, 2019; https://www.apa.org/pubs/journals/features/ccp-ccp0000380.pdf).

This AMASS will teach easy-to-adopt strategies for enhancing the transparency, accessibility, and credibility of your research, and ways in which these practices actually save personal time and effort. We will highlight: (a) using preregistration tools to boost the odds of publication acceptance, regardless of your study results; (b) tools for staying up to date in your field; (c) earning credit for, and disseminating, your work earlier in the paper-writing process; and (d) creating easy-to-reproduce analyses that meet current publication standards for data transparency. The session will include hands-on practice with free, credibility-increasing tools such as preprint servers, open data repositories, open-source analysis tools (R and JAMOVI), and the Open Science Framework. It will also focus on immediate translation of at least one open science practice into each participant's workflow by the following day, no matter the type of research you conduct, from work on basic mechanisms of psychopathology to clinical trials to dissemination and implementation science.
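To make the "easy-to-reproduce analyses" idea concrete, here is a minimal sketch of what a reproducible analysis script looks like. It uses Python with NumPy purely as a stand-in for the R/JAMOVI tools covered in the session, and the data are simulated for illustration; the point is that a fixed random seed and a fully scripted pipeline let any reader regenerate the reported numbers by rerunning one file.

```python
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference between two groups (pooled SD)."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

# Fixed seed: rerunning this script reproduces the same simulated data
# and therefore the same reported effect size.
rng = np.random.default_rng(2020)
treatment = rng.normal(0.5, 1.0, size=50)  # simulated post-treatment scores
control = rng.normal(0.0, 1.0, size=50)    # simulated control scores

d = cohens_d(treatment, control)
print(f"Cohen's d = {d:.3f}")
```

Posting a script like this alongside the data in an open repository (e.g., on the Open Science Framework) is what makes the analysis auditable rather than merely described.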

At the end of this session, the learner will be able to:

  • Describe how and why various credibility-enhancing practices can support and strengthen your (and your lab's) research.
  • Establish a quicker ideas-to-paper pipeline (using preprint servers to disseminate research earlier).
  • Download and apply at least one tool (including a point-and-click interface) that helps ensure your analyses are easy for others to reproduce.
  • Explain how preregistration and registered reports can facilitate publication regardless of results.
  • Identify at least one way you can apply open science practices in your research starting the next day, regardless of your research area within clinical psychology.
Recommended Readings:

JAMOVI user manual, on creating reproducible R code using a point-and-click interface: https://www.jamovi.org/user-manual.html

Nelson, L. D., Simmons, J., & Simonsohn, U. (2018). Psychology's renaissance. Annual Review of Psychology, 69(1), 511-534.

Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences of the United States of America, 115(11), 2600-2606.

Srivastava, S. (2018). Sound inference in complicated research: A multi-strategy approach. https://doi.org/10.31234/osf.io/bwr48

Tackett, J. L., Brandes, C. M., King, K. M., & Markon, K. E. (2019). Psychology's replication crisis and clinical psychological science. Annual Review of Clinical Psychology, 15, 579-604.

AMASS 2: Thursday, November 19 | 1:00 p.m. - 5:00 p.m.

Analyzing Longitudinal Data Collected During the Coronavirus Pandemic

Vivian C. Wong, Ph.D., University of Virginia

Participants earn 4 continuing education credits

Primary Topic: Research Methods and Statistics

Keywords: Statistics, Causal Inference, Evaluation, Longitudinal, Methods

Basic to moderate level of familiarity with the material

The COVID-19 global pandemic has had a profound effect on the lives of millions, including those who are participants or potential participants in our longitudinal research studies. An event of this kind affects not only participant recruitment and data collection but also the analysis of data collected before, during, and after the pandemic. This AMASS will cover a set of research designs and statistical techniques (i.e., quasi-experimental methods) designed for testing longitudinal and causal hypotheses under these conditions. Intended for researchers who are interested in or conducting randomized controlled trials or quasi-experimental longitudinal studies, this AMASS will also address internal and external validity concerns with implementing evaluation studies during the pandemic period, and will provide researchers with a framework for making decisions about planning and implementing their studies. Basic knowledge of and experience with longitudinal models (e.g., repeated-measures ANOVA, multilevel models, generalized estimating equations) is beneficial but not necessary.
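One of the time series approaches the session covers can be sketched briefly. The following is a hypothetical illustration, not the presenter's material: an interrupted time series fit by segmented regression, which estimates the level shift in a repeatedly measured outcome at a known disruption point (such as the pandemic onset). Python with NumPy is used for illustration, and all data are simulated.

```python
import numpy as np

def segmented_fit(y, t0):
    """OLS fit of y ~ 1 + time + post + time_since_interruption.

    Returns (intercept, pre-interruption slope, level change at t0,
    slope change after t0)."""
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)           # indicator: after the disruption
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Simulated monthly outcome: gentle upward trend, then a 3-point drop
# at month 12 (the hypothetical disruption), plus measurement noise.
rng = np.random.default_rng(0)
t = np.arange(24, dtype=float)
y = 10.0 + 0.1 * t - 3.0 * (t >= 12) + rng.normal(0, 0.2, size=24)

b0, b1, b2, b3 = segmented_fit(y, t0=12)
print(f"estimated level change at interruption: {b2:.2f}")  # true simulated value is -3.0
```

The same design matrix generalizes to the session's other scenarios, e.g., by adding covariates or modeling the pre/post slopes separately for multiple cohorts.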

At the end of this session, the learner will be able to:

  • Describe quasi-experimental designs for intervention evaluation (i.e., regression-discontinuity, matching, and time series approaches).
  • Evaluate the appropriateness of these quasi-experimental designs for a range of data collection scenarios and hypotheses.
  • Apply a framework for making study implementation decisions during the pandemic period.
Recommended Readings:

Kim, Y., & Steiner, P. (2016). Quasi-experimental designs for causal inference. Educational Psychologist, 51, 395-405.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.

West, S. G., Cham, H., Thoemmes, F., Renneberg, B., Schulze, J., & Weiler, M. (2014). Propensity scores as a basis for equating groups: Basic principles and application in clinical treatment outcome research. Journal of Consulting and Clinical Psychology, 82(5), 906.

Association for Behavioral and Cognitive Therapies
305 7th Avenue, 16th Fl., New York, NY 10001 | Phone (212) 647-1890 | Fax: (212) 647-1865
Copyright 2003-2020 ABCT. All rights reserved.