JCCAP FDF/2017/Evidence based assessment

This is the landing page created at the First JCCAP Future Directions Forum to organize information about publicly available data sets, along with suggestions for best practices in designing and reporting research using these types of variables. There were four keynote addresses: Dr. Eric Youngstrom discussing future directions in assessment, Dr. Matthew Nock discussing suicidal and self-injurious behavior, Dr. Mary Fristad discussing bipolar disorder, and Dr. Daniel Shaw discussing trajectories and treatment for conduct problems. Each keynote was the focus of two or three smaller breakout discussion sessions led by content experts. A set of four pages gathers the ideas and resources related to these sessions.

Dr. Eric Youngstrom

Eric Youngstrom gave the keynote, using the vignette of Lea as an example to illustrate steps and principles in applying evidence-based assessment.

The article walking through the case material in more detail is part of the PDF program for the meeting (Youngstrom, Choukas-Bradley, Calhoun, & Jensen-Doss, 2015).[1]

The ideas are also developed and updated in Youngstrom, Van Meter, Frazier, Hunsley, Prinstein, Youngstrom, and Ong (in press), Clinical Psychology: Science and Practice.

The talk and paper lay out a way of reorganizing the sequence of assessment to maximize efficiency and reduce costs. The choice of when to assess and what tool to use is guided by focusing on three different phases of the clinical encounter: a Prediction phase (focused on risk assessment, screening, and integrating the results to produce a dashboard of probable hypotheses); a Prescription phase (where assessment finalizes diagnoses and case formulation and guides treatment selection); and a Process phase (where the emphasis shifts to measuring progress, process measures such as session attendance, homework, or mediational mechanisms, as well as outcome evaluation). By optimizing the order of assessments and using the best free measures whenever available, the approach can yield large improvements in accuracy, and may also improve satisfaction and outcomes, while adding little time or expense to the evaluation process.
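The Prediction phase's "dashboard of probable hypotheses" rests on the odds form of Bayes' theorem, combining a base rate with a diagnostic likelihood ratio (DLR). A minimal sketch of that arithmetic (function and variable names here are illustrative, not from the talk or paper):

```python
def posttest_probability(pretest_prob: float, dlr: float) -> float:
    """Update a pretest probability with a diagnostic likelihood
    ratio (DLR) using the odds form of Bayes' theorem."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * dlr
    return posttest_odds / (1.0 + posttest_odds)

# Example: a 10% base rate combined with a screening result
# carrying a DLR of 5 yields roughly a 36% posttest probability.
print(round(posttest_probability(0.10, 5.0), 2))  # → 0.36
```

Chaining several such updates across assessment steps is what lets the sequence be reordered for efficiency: cheap, high-DLR screeners can rule hypotheses in or out before costlier measures are administered.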

Barriers to implementation include a lack of familiarity with the model and difficulty finding the tools and the supporting information.

Recent initiatives that are addressing these barriers include:

Multiple societies contributing small grant support to build Wikipedia pages that describe the best free assessment resources for common presenting problems and diagnoses (a list is available here). These pages provide information for the general public, and they have been developed by small teams mixing students and content experts. The teams are working to include links to PDFs of the measures, and sometimes even online scoring tools that automate the process.

A second initiative, with support from SCCAP, has also built out a set of sister pages on Wikiversity that are intended for an audience of clinicians and trainees. They include more information about scoring, norms, psychometrics, and examples of interpretation.

A new student service organization, Helping Give Away Psychological Science (HGAPS) has been founded with the purpose of helping undergraduate and graduate students learn the editing skills, etiquette, and critical thinking needed to make successful edits on Wikipedia and Wikiversity pages. HGAPS will work in partnership with SCCAP and other professional societies to help extend the quantity and quality of information about psychology that reaches the public.

Dr. Andrew Freeman: Classification/diagnosis

Andrew Freeman led this discussion group. It focused on EBA for making clinical decisions.

Themes included:

1) Implementation challenges for everyday practice. Discussed altering the practice environment by standardizing intake procedures and collecting measures automatically (e.g., the receptionist hands out measures, electronic collection, mailing measures ahead of time) as aids.

2) Design considerations when evaluating or planning a study. Discussed the necessity of meaningful comparison groups (case-control designs limit generalizability; clinically complex samples maximize external validity) or of focusing on meaningful comparisons (e.g., ADHD-inattentive vs. ADHD-combined); keeping measures masked/blind to diagnosis (a feature of many research contexts); and favoring inclusion over exclusion criteria for decision-making research.

3) Future directions. Discussed integrating technology (e.g., actigraphy, social media, EMA <add link>); expanding populations and settings; and finding collaborators from other disciplines to help with methods (e.g., programming, statistics). The group meant to discuss (but did not reach) statistical tools: current approaches are regression-based and use Receiver Operating Characteristic (ROC) analysis; the field is moving toward machine learning and trajectory mapping (e.g., applying econometric approaches used in stock-market analysis to individual data).
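The ROC analysis mentioned above has a simple probabilistic core: the area under the ROC curve (AUC) equals the probability that a randomly chosen case scores higher than a randomly chosen non-case (the Mann-Whitney interpretation). A minimal sketch, with illustrative names and made-up scores rather than any real data set:

```python
def roc_auc(case_scores, control_scores):
    """AUC as the proportion of case/control pairs where the case
    scores higher; ties count as half (Mann-Whitney interpretation)."""
    wins = 0.0
    for c in case_scores:
        for n in control_scores:
            if c > n:
                wins += 1.0
            elif c == n:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

cases = [7, 9, 6, 8]     # hypothetical screening scores, diagnosed youth
controls = [3, 5, 6, 2]  # hypothetical scores, non-diagnosed youth
print(roc_auc(cases, controls))  # → 0.96875
```

An AUC of 0.5 means the measure is no better than chance at separating the groups; values approaching 1.0 indicate strong discrimination.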

Dr. Susan White: Treatment

Susan White led this discussion group. It focused on EBA and treatment.

Themes included:

Experimental Therapeutics <add link to NIMH page>

Trajectory Matching --

  • Extinction bursts -- we know that this is a common occurrence early in behavioral treatment
  • "Tried time out... it didn't work!"
  • When do treatment gains materialize in Coping Cat? ...during the exposure sessions

Machine Learning

Ecological Momentary Assessment -- The discussion around this led to the idea of an EMA discussion and research support page (an "EMA Party" similar to the ROC Party pages)

Dr. Matthew Lerner: Neuroscience

Matthew Lerner led this discussion group.


  1. Youngstrom, Eric A.; Choukas-Bradley, Sophia; Calhoun, Casey D.; Jensen-Doss, Amanda (February 2015). "Clinical Guide to the Evidence-Based Assessment Approach to Diagnosis and Treatment". Cognitive and Behavioral Practice 22 (1): 20–35. doi:10.1016/j.cbpra.2013.12.005. http://www.sciencedirect.com/science/article/pii/S1077722913001193.