Back to Topic: Instructional Design > User Testing of E-Learning Courses > Formative Evaluation Overview

ADDIE ISD Model

The ADDIE model, an instructional systems design model, is the generic process traditionally used by instructional designers and training developers. The five phases (Analysis, Design, Development, Implementation, and Evaluation) represent a dynamic, flexible guideline for building effective training and performance support tools.


Formative and Summative Evaluation

Evaluation in the ADDIE model consists of two parts: formative and summative.

Formative evaluation should be conducted at regular intervals throughout the ADDIE process; it acts as a check-and-balance system across the entire model. User testing is a kind of formative evaluation that occurs at the end of the development cycle.

Summative evaluation, by contrast, is conducted at the conclusion of instruction and measures how well the major outcomes of a course reflect its objectives. It is like the final grade or score for the instruction.

Formative Evaluation in the ADDIE Model

Knitting your ADDIE model together? Remember to weave in your formative evaluation thread. Formative evaluation is conducted throughout the ADDIE instructional model as a way to check the status of the analysis, design, development, implementation, and evaluation stages. Ongoing formative evaluation in the form of surveys, interviews, observations, and records helps inform changes to instruction at every stage. The basic ADDIE instructional design model helps structure e-learning courses, and formative evaluation needs to be conducted regularly to create instruction that is effective, efficient, and appealing. Formative evaluation is conducted at regular review intervals, especially during the development cycle.


Planning in Formative Evaluation

The purpose of every formative evaluation, regardless of context or curriculum, is to validate the instructional design and strategies and to ensure the instructional objectives are being met. Morrison et al.'s chart below outlines some critical questions to ask when planning a formative evaluation. In Lesson 3, you will see a user testing plan as an example of formative evaluation planning.


Design Function: Formative Evaluation Planning
Components: Purpose, audience, issues, resources, evidence, data-gathering techniques, analysis, reporting

From: Morrison, G. R., Ross, S. M., & Kemp, J. E. (2004). Designing effective instruction (4th ed.). New York: John Wiley & Sons.

Stages & Examples of Formative Evaluation

Dick and Carey (1991) describe how formative evaluation differs throughout the ISD process. Their three-stage model gives a good picture of formative evaluation in the development cycle. In each stage, formative evaluation methods such as observations, interviews, surveys, and records are used.


In Stage 1, one-to-one trials, the designers test the instruction with individual learners through observation, surveys, and/or interviews. This occurs at the beginning of the development process to try out the instruction and to identify the impression it makes on the learner. In the case of user testing of an e-learning course, this stage can be conducted with asynchronous interviews to collect user comments (link to Tracy's result example) and a worksheet survey (link to Christine's worksheet for user testing), creating records for analysis.


In Stage 2, small-group trials, the designers test a preliminary or draft (beta) version with small groups (8-20 learners) to observe and measure attitudes and performance and to identify strengths and weaknesses of the instruction. In the case of user testing of an e-learning course, this stage can be conducted as a synchronous observation in a computer lab (insert side photo with caption "User Testing in a Computer Lab with Two-way Mirrors"), with a two-way mirror separating the participants from the observers. Webcams (insert photo of user/tester at computer terminal with webcam as central focus; caption) can also be set up for an asynchronous review of user performance.


In Stage 3, field trials, the designers test a completed version of the instruction with the actual learners to assess its implementation and to measure performance and attitudes. While user testing is usually conducted before a final version is implemented, rapid prototyping could push user testing into the implementation phase in the form of a synchronous WebEx observation (link to WebEx demo).


Dick, W., & Carey, L. (1991). The systematic design of instruction (2nd ed.). New York: Harper-Collins.

