
The Learning Systems Design and Development Competencies (Artifact Description):


Formative and Summative Evaluations

Artifacts (Click on the links below to download):

Formative Evaluation Plan
SMART Board Training Videos Summative Evaluation Plan


Artifact Description:

The Formative and Summative Evaluation Course introduced me to the importance of evaluating learning systems. It contained two major projects: a Formative Evaluation and a Summative Evaluation Plan. Both were group projects. We evaluated SMART Board video learning modules developed by an educational training organization that serves schools in a Missouri-area school district. One of the group members was an employee of the organization and served as our subject matter expert.


We created and then implemented a formative evaluation plan to reveal how well the training videos were working and where they could and should be improved. We used a variety of methods to collect data: an instructional design expert reviewed the training videos, and we surveyed the actual users of the learning system. The feedback from both of these reviews provided the data needed to make recommendations for improvements.


Next, we created a summative (effectiveness) evaluation plan. According to Reeves and Hedberg, “The overall purpose of effectiveness evaluation is to determine whether the interactive learning system accomplishes its objectives within the immediate or short-term context of its implementation” (Reeves & Hedberg, 2003, p. 61). We developed a plan that would not only test effectiveness, but also assist the creators of the videos in decisions regarding future development of similar training. We planned several measurement tools to carry out the summative evaluation: an initial review, pre- and post-assessments, a follow-up survey, a supervisor survey, an impact assessment, and a general marketing survey. Each member of the team took part in the planning of these tools.


These were the most collaborative projects I experienced in my time at the University. We were each assigned a section of the evaluation to begin. We worked through Google Docs so we could build on what other members had started, and we met every week to refine, finish, or write sections of the report together. Some sections we created entirely as a group. The sessions were sometimes long and intensive, but they resulted in two very strong reports that received perfect scores. We learned not only how to conduct an evaluation, but also grew in our teamwork and collaboration skills. The sections I was assigned, either individually or with one other team member, were:

  • Questions (formative and summative)
  • Purposes (formative and summative)
  • Audience (formative and summative)
  • Methods (formative)
  • User survey (formative – helped member assigned to it refine it)
  • Introduction (summative)
  • Background (summative)
  • General Marketing Survey (summative)
  • Appendix A: Formative Evaluation Results and Recommendations (summative)

I also functioned as a co-leader, keeping the team focused on the goals for each session and making sure meetings were scheduled so we never fell behind.


Reflection:

As mentioned earlier, I grew in my understanding of the importance of evaluations. Taking the time to test a learning system and elicit the viewpoints of your users and other objective sources reveals opportunities for improvement. Also, over time a great learning system can lose its effectiveness due to an organization's changing priorities, new developments in technology, and similar factors. A repeated summative evaluation can reveal whether a learning system is still relevant. Evaluations do take time and cost money, but training that isn't effective or is no longer needed can be costlier.


References:

Reeves, T. C., & Hedberg, J. G. (2003). Interactive learning systems evaluation. Englewood Cliffs, NJ: Educational Technology Publications.


