Wikipedia has more about this subject: Pittsburgh Sleep Quality Index
The Open Teaching of Psychological Science (OToPS) template is a shell that we use for building new instrument pages on Wikiversity.
Lead section
The Pittsburgh Sleep Quality Index (PSQI) is a self-report questionnaire that was developed by Daniel Buysse (M.D.), Timothy Monk (Ph.D.), Charles Reynolds (M.D.), Susan Berman, and David Kupfer (M.D.) to provide a valid and reliable measure of sleep quality.[1] The creators of the PSQI intended the measure to discriminate between "good" and "bad" sleepers in a way that is easy for clinicians and researchers to use, and to provide a brief assessment of potential sleep disturbances that may affect sleep quality.[1] This 19-item assessment measures sleep quality over the past month and potential sleep disturbances such as having a partner in the room or loud snoring.[1] The PSQI is a multiple-choice test that takes about 10-15 minutes to administer, and can be used in clinical, research, and everyday settings. Its strong psychometric properties have allowed researchers and clinicians to accurately gauge the quality of sleep in their patients.
Psychometrics
Steps for evaluating reliability and validity
Instrument rubric table: Reliability
Note: Not all of the different types of reliability apply to the way that questionnaires are typically used. Internal consistency (whether all of the items measure the same construct) is not usually reported in studies of questionnaires; nor is inter-rater reliability (which would measure how similar people's responses were if the interviews were repeated, or if different raters listened to the same interview). Therefore, make adjustments as needed.
Reliability
Reliability refers to whether the scores are reproducible. Unless otherwise specified, the reliability scores and values come from studies done with a United States population sample. Here is the rubric for evaluating the reliability of scores on a measure for the purpose of evidence-based assessment.
Instrument rubric table: Validity
Validity
Validity describes the evidence that an assessment tool measures what it was supposed to measure. There are many different ways of checking validity. For screening measures, diagnostic accuracy and discriminative validity are probably the most useful ways of looking at validity. Unless otherwise specified, the validity scores and values come from studies done with a United States population sample. Here is a rubric for describing validity of test scores in the context of evidence-based assessment.
Development and history
Impact
- What was the impact of this assessment? How did it affect assessment in psychiatry, psychology, and other health care professions?
- What can the assessment be used for in clinical settings? Can it be used to measure symptoms longitudinally? Developmentally?
Use in other populations
- How widely has it been used? Has it been translated into different languages? Which languages?
Scoring instructions and syntax
We have syntax in three major languages: R, SPSS, and SAS. All variable names are the same across all three, and all match the CSV shell that we provide as well as the Qualtrics export.
Hand scoring and general instructions
Scoring and interpretation
Consisting of 19 items, the PSQI measures several different aspects of sleep, offering seven component scores and one composite score. The component scores consist of subjective sleep quality, sleep latency (i.e., how long it takes to fall asleep), sleep duration, habitual sleep efficiency (i.e., the percentage of time in bed that one is asleep), sleep disturbances, use of sleeping medication, and daytime dysfunction. Each item is weighted on a 0–3 interval scale. The global PSQI score is then calculated by totaling the seven component scores, providing an overall score ranging from 0 to 21, where lower scores denote a healthier sleep quality.

Traditionally, the items from the PSQI have been summed to create a total score measuring overall sleep quality. Statistical analyses also support looking at three factors: sleep efficiency (using the sleep duration and sleep efficiency variables), perceived sleep quality (using the subjective sleep quality, sleep latency, and sleep medication variables), and daily disturbances (using the sleep disturbances and daytime dysfunction variables).[19]

- Component 1 - subjective sleep quality: item 9
- Component 2 - sleep latency: items 2 and 5a. For item 2, the scoring is: (0) 15 minutes or less; (1) 16–30 minutes; (2) 31–60 minutes; (3) more than 60 minutes.
- Component 3 - sleep duration: item 4. For item 4, the scoring is: (0) more than 7 hours; (1) 6–7 hours; (2) 5–6 hours; (3) less than 5 hours.
- Component 4 - habitual sleep efficiency: items 1, 3, and 4. Sleep efficiency = (# hours slept / # hours in bed) × 100%, where # hours slept comes from question 4 and # hours in bed is calculated from the responses to questions 1 and 3. Scoring: (0) more than 85%; (1) 75–84%; (2) 65–74%; (3) less than 65%.
- Component 5 - sleep disturbances: items 5b–5j
- Component 6 - use of sleep medication: item 6
- Component 7 - daytime dysfunction: items 7 and 8

Global PSQI score: sum of the seven component scores.
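The scoring rules above can be sketched in code. The project's actual syntax files are in R, SPSS, and SAS; the Python sketch below is only an illustration, and the function names are ours. It implements the duration, efficiency, and global-score rules exactly as stated (boundary values such as exactly 7 hours or exactly 75% fall into the ranges as written).

```python
def score_sleep_duration(hours_slept):
    """Component 3: >7 h -> 0; 6-7 h -> 1; 5-6 h -> 2; <5 h -> 3."""
    if hours_slept > 7:
        return 0
    if hours_slept >= 6:
        return 1
    if hours_slept >= 5:
        return 2
    return 3

def score_sleep_efficiency(hours_slept, hours_in_bed):
    """Component 4: efficiency = (hours slept / hours in bed) * 100%.
    >85% -> 0; 75-84% -> 1; 65-74% -> 2; <65% -> 3."""
    efficiency = hours_slept / hours_in_bed * 100
    if efficiency > 85:
        return 0
    if efficiency >= 75:
        return 1
    if efficiency >= 65:
        return 2
    return 3

def global_psqi(component_scores):
    """Global PSQI = sum of the seven component scores (each 0-3), range 0-21."""
    if len(component_scores) != 7:
        raise ValueError("expected exactly seven component scores")
    return sum(component_scores)
```

For example, someone who sleeps 6 hours out of 8 in bed has a sleep efficiency of 75%, which scores 1, and `global_psqi([1, 1, 0, 1, 1, 0, 1])` returns a global score of 5.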
CSV shell for sharing
Here is a shell data file that you could use in your own research. The variable names in the shell correspond to the scoring code for all three statistical programs.
Note that our CSV includes several demographic variables, which follow current conventions in most developmental and clinical psychology journals. You may want to modify them, depending on where you are working. Also pay attention to the possibility of "deductive identification": if we ask for personal information in enough detail, it may be possible to figure out the identity of a participant based on a combination of variables.
When different research projects and groups use the same variable names and syntax, it becomes easier to share data and work together on integrative data analyses, or "mega" analyses (which are different from, and better than, meta-analysis in that they combine the raw data rather than working with summary descriptive statistics).
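As a toy illustration of this point (the scores and the variable name `psqi_total` here are hypothetical, not from the shell file), pooling raw rows from two studies that share variable names is a one-line operation, and statistics that summary tables rarely report can then be computed directly from the combined data:

```python
# Hypothetical raw rows from two studies using the same variable name.
study_a = [{"psqi_total": 4}, {"psqi_total": 9}]
study_b = [{"psqi_total": 6}, {"psqi_total": 12}, {"psqi_total": 5}]

# Because the variable names match, pooling the raw data is trivial.
pooled = study_a + study_b
pooled_scores = [row["psqi_total"] for row in pooled]

# Any statistic can be recomputed from the raw data, including ones
# (like the pooled median) that published summary tables rarely report.
pooled_mean = sum(pooled_scores) / len(pooled_scores)
pooled_median = sorted(pooled_scores)[len(pooled_scores) // 2]  # n is odd here
```

A meta-analysis, by contrast, would only have each study's reported summaries (for example, two means and two sample sizes) to work with.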
R/SPSS/SAS syntax
R code goes here

SPSS code goes here

SAS code goes here
See also
Here, it would be good to link to any related articles on Wikipedia. For instance:
External links
Example page
OToPS usage history
editDetails | |
---|---|
Date Added
(when was measure added to OTOPS Survey? |
<Date> |
Date Deleted
(when was measure dropped from OTOPS survey?) |
<active/deleted>, <date> |
Qualtrics scoring | Variable name of internally scored variable:
XXX Notes on internal scoring: - Is it piped? - Is it POMP-ed? - Any transformations needed to make it comparable to published benchmarks? |
Content expert | Name: Jane Doe, Ph.D.
Institution/Country: University of Wikiversity / Canada Email: Type email out Contacted: Y/N Following page: Y/N |
References