Talk:Evidence-based assessment/Technical Manuals

Question and Answer: What is a Technical Manual in the Cloud?

  • I’m not sure what a “peer-reviewed manual in the cloud” would be. Can you please describe what you envision or link to the GBI version?

EY: It means what it sounds like – a technical manual, but with two innovations:

  • The version of record is built and housed online, with all the advantages that offers (low cost, easy distribution, easy to update)
  • The manual goes through peer review at least once, which has not been the case with manuals to date. (Imagine Achenbach or Pearson sending their manuals through peer review! Or Depue sending his typed distribution notes and scoring instructions…. The first two won't, because it would mean releasing trade secrets; and the "homebrew" manuals don't, because they go deeper into the weeds than is typical for peer-reviewed articles, with the exception of technical methods journals.)
  • It sounds like this is essentially a literature review, is that correct?

EY: A literature review would be part of it. At a minimum, a "start" version would contain a bibliography and scoring instructions, plus links to distribution versions of the measure. The next step up would be an annotated bibliography. Fully articulated, it would look a lot like the ASEBA or Conners manuals (those are the two that I mashed up to build the comprehensive outline – which, not surprisingly, most people initially find overwhelming).

  • Would you say more about your plan for peer review? If I understand the project correctly, it's not necessarily the type of thing that would normally be published as a peer-reviewed journal article.

EY: Gist: The “competing product” is a published manual, not a review article per se.

Several parts to this:

  • Correct, these are not usually published in top-tier peer-reviewed journals, partly due to space constraints (which are a legacy of paper printing and distribution). There are exceptions, most often framed as "monographs." The most frequent examples were the Sage "Greenies" for "extended play, dance mix" versions of methods papers; but Child Development and other top journals used to publish them, too. Depue's 1981 GBI paper was a huge "monograph"-type article in J Abnormal Psych, and Meehl had several key articles published as monographs in Psychological Reports (a pay-to-publish venue, which he used because he thought the idea and format were important, even if other outlets wouldn't take them).
  • The plan for peer review is sending the version of record to a (functioning) WikiJournal for peer review. (← I know all three of us are rolling our eyes as we read that… they are making progress, and I have two other tactics I am going to use this summer to get things moving, too.)
  • Functionally, I think that these will lead to a better product. Reviewers will suggest points of clarification and elaboration.
    • Technical reviewers will be particularly helpful at pointing out caveats and ideas for future work
    • Aficionados of the instrument (e.g., Dan Kline with the GBI; Hantouche with the HCL) will add to the strengths and the research bibliography
    • Authors of competing products (e.g., Cecil Reynolds, Marika Kovacs) will sell against it, which would add to "NPOV" (neutral point of view) and balance (← the key here being for the Action Editor to make sure that reviewers with a COI can't kill the submission, only shape it to be more balanced)
    • Clinical reviewers (← not usually sought out) will pull for more practical details about use cases and improve the utility
    • Lived experience reviewers (← almost never sought out traditionally) would pull for helpful framing of the reporting and feedback
  • Strategically, the goal is to build a model where the "Best of the Free" tools in the HGAPS Assessment Center and HGAPS Toolkit are not merely at no disadvantage compared to the commercial incumbents, but leapfrog them – see the comparison table below.

Feature | Commercial | Manual in Cloud
------- | ---------- | ---------------
Manual | Paper, expensive | Online, free
Peer reviewed | No* | Yes
Copies of measure | Yes, printed (or sold separately) | Links
Copies in other languages | No | Linked
Clinical scoring | In tech manual; scoring software sold separately | In free manual; linked to R (and other) code; may be a free version in the Assessment Center
Research scoring | Not shared | Free R code in OSF.io, distributed as links in the Wiki

* Achenbach and Reynolds both have a model where they elaborated parts of the validity studies in the tech manual as peer-reviewed publications, mostly in JAACAP and JCCAP. Glutting and Watkins did the same with some of the clinical validation studies of the Wechslers when they consulted with Psych Corps. So there's a blueprint for putting material in the tech manual and reworking it as a standalone peer-reviewed article. With the content being CC BY, there's no intellectual property obstacle, either.
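
To make the "Research scoring" row concrete: here is a minimal sketch of the kind of free scoring script that could live on OSF.io and be linked from a manual in the cloud. It uses the published PHQ-9 scoring rules (nine items scored 0-3; severity bands starting at 5, 10, 15, and 20); the function name and interface are hypothetical illustrations, not the official HGAPS code.

```r
# Minimal sketch of a cloud-distributed scoring script (hypothetical names).
# PHQ-9: 9 items, each scored 0-3; total score ranges 0-27.
score_phq9 <- function(responses) {
  # responses: numeric vector of the 9 item scores
  stopifnot(length(responses) == 9, all(responses %in% 0:3))
  total <- sum(responses)  # handling of missing items omitted in this sketch
  severity <- cut(total,
                  breaks = c(-1, 4, 9, 14, 19, 27),
                  labels = c("minimal", "mild", "moderate",
                             "moderately severe", "severe"))
  list(total = total, severity = as.character(severity))
}

score_phq9(c(1, 2, 0, 1, 2, 1, 0, 1, 1))  # total = 9, severity = "mild"
```

A short script like this, posted with the manual and versioned alongside it, delivers the whole "Research scoring" column of the table at zero cost to the user.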


It will take a bit of time for us to develop the scaffolding for people to “get it” and be able to independently create these, but I don’t see any insurmountable barriers, and it would be a fundamentally better product. Connecting it to the AC and Translations projects has the potential to lead to much more rapid adoption.


Thank you for the questions, Anna, and for looping me into the dialog.


This pushed me to write down more of what has been taking form in my mind’s eye, and it will help a lot with the path forward.


We have several of these in motion – the GBI is the oldest, but the HCL, PHQ-9, and SCAARED are all moving in this direction. When we have the scaffolding in place, I will pick 5-6 of these as target projects for PSYC 525, so that we'll have students working with the measures and gathering the code, paired with HGAPS teams working on gathering the translations, programming into the AC, etc. (If we really have the scaffolding, I would consider turning 525 into a 60-seat class with a TA to increase the bandwidth.)

Some of the technical aspects could become honors theses (like Caroline Vincent’s and Briana Augustin’s this year), and with more scaffolding, independent study projects.

They could also be Master's theses, or maybe a dissertation (particularly in the "staple 3 papers" variation).

This also does not have to be a UNC thing – Kalil Manara is an exemplar of how students in other parts of the world are learning about HGAPS and engaging with it, designing doctoral projects tightly linked to it.


My two aces for getting the WikiJournals moving:

  1. Call in favors to get reviewers at WJ Med. This is what I am going to do in the next couple of weeks for the stalled articles. And now I have the bandwidth to nag and beg, which I didn't have earlier in 2022.
    1. Pros: The journal exists, has enough papers to qualify for indexing in PubMed (and will be reviewed for that again in a year), and is already indexed in various other databases.
    2. Cons: Well known to us already. Slooooow, and sometimes amateurish internal processes.
  2. Transfer the papers or send them to the new WikiJournal, and hammer them through there.
    1. Pros: We have more control over the speed and outcome. It starts building the critical mass of articles needed to get independently indexed. Because the initial submissions will be invited (formally or informally), the quality will start off high (which will prevent the problem of uneven quality that tanked WJ Med in the last round of PubMed consideration).
    2. Cons: Not indexed yet. That's the main one. It's offset by the fact that we are already building these, so the sunk cost turns into an efficiency as we "upcycle" them into peer-reviewed versions. The technical barrier of having them on Wiki is addressed by having HGAPS build them – so we have a cadre of editors and a pipeline for growing more.

The lack of indexing is also less of a hit for this specific type of product, because (a) manuals aren't usually sent to journals (so there isn't an opportunity cost for good authors like you! 😊), (b) a lot of the work is getting done by people at a career stage where having any documentation of research experience and contribution is a resume builder, and (c) it is democratizing access to the generation of the research, as well as the implementation of it. I cannot overstate how powerful the response to this aspect was in Turkey, where I got to describe it and get immediate feedback from the audience (which was entirely psychiatrists, from residents to senior professors).


When we have several of these ready, we can repackage them as a "Special Issue" or Special Section of the WikiJournal. (I just got an invitation from Frontiers to edit one of these myself: their model is to accumulate 9+ papers – not all at the same time – and then remix them as a separate PDF with its own ISBN!)


Get 9 of these, plus a 10th that is an overview/intro article turning this long email into a more polished piece, and we are 1/3 of the way to being indexed in PubMed right there. Transfer a couple of stalled articles out of WJ Med and we are halfway there. I have several mindmaps, made on the plane from Munich to DC, for articles that I envision as "twinned" submissions, with one going to a paywalled traditional journal and the other going to the WikiJournal.


I am old enough that I remember when Alan Kazdin launched Clinical Psychology: Science and Practice. He put on a master class in how to launch a journal. He got articles from Jerome Kagan and some of the other biggest dawgs, and released Volume 1, Issue 1 when he had a critical mass. He was strategic about invitations to authors and reviewers. It got indexed quickly, and its path to where it is now was not linear, but it was on its way.


Bob and I brainstormed about whom to involve, and then I hit the brakes when I realized that we did not have a good user experience ready for the "Cathy Lords" and "Deanna Barch-es." The pivot in my mind is to go ahead and build more than half of volume 1 on the down low, invite some big dogs when we are 60% of the way to a complete volume or bolus for indexing, and then do a "grand opening" announcement after a year or two of soft launch.


People I want to share a version of this email with:

Andy De Los Reyes, Mary Fristad, Cecil Reynolds, Paul Frick (← experienced editors at traditional journals, plus measure authors and innovators)

Bruce Chorpita, David Watson, Lee Anna Clark, Roman Kotov

Guillermo, Thomas

WikiMedia Program officers

The troops in the trenches working on some of these manuals in the cloud….


😊


Okay, that was a productive hour for me. I hope it was helpful for you and for the Board. Seriously.


I am going to print this out now so that I have a copy for my plane trip to Korea.


Happy Monday!


-Eric


P.S. I need to scan Depue’s typed manual and ask him about putting it online for historical reference.


P.P.S. Version of record: This would be another long email in its own right if fully developed, but I think there actually is a pretty easy way of having a stable version of record. The "print as a book" tool on Wiki already exists and automates the process. I just clicked the "Download as PDF" button and got a 15-page PDF of the GBI manual (attached). Easy. That's a distribution model for clinicians and researchers right there. But here's the neat twist:

Schedule a review cycle, and have either the authors or an independent panel (← peers!) review it. When complete, download it as a PDF, add it to OSF, give it a DOI, and link back as "Author reviewed version X.y, 23 May 2022". That way there would be a documented "official" version. The WikiJournal could have a process for peer review and publication of the updated peer-reviewed version. (Prior considerations about page limits don't apply anymore in an electronic format; and IP considerations don't apply with the licensing. Pissing off the installed base who bought the previous version, like APA just did with the DSM-5-TR, is not a problem, since it was free to the user anyway!!)
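
As a sketch of the mechanics for that archiving step, assuming the osfr R package (the OSF client on CRAN; check its current docs for exact signatures) – the project GUID, file name, and DOI below are hypothetical placeholders, and the DOI itself is minted through the OSF web interface rather than from R:

```r
# Hedged sketch: push the exported PDF to an OSF project as the version of record.
library(osfr)

osf_auth()  # reads a personal access token from the OSF_PAT environment variable

project <- osf_retrieve_node("abc12")              # hypothetical OSF project GUID
osf_upload(project, path = "GBI_manual_v1-2.pdf")  # the PDF exported from the wiki

# After minting a DOI in the OSF web interface, the wiki page links back with
# something like: "Author reviewed version 1.2, 23 May 2022 – doi:10.17605/OSF.IO/abc12"
```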

Eyoungstrom (discuss • contribs) 13:46, 23 May 2022 (UTC)
