With over a decade of experience working with The Speech Team, Inc., Kate has performed countless analyses and evaluations to improve ILT and eLearning courses. Whether by implementing new delivery technology or by adjusting content and assessments, these data-driven improvements were made to better benefit the next group of learners. This was done in part by aggregating pre-post test data for analysis and by reviewing individual course evaluations. Those evaluations provided unique insight and allowed ILT and eLearning courses to be reshaped around their users, enabling the creation of highly usable and accessible materials.
In public higher education, Kate completed an extensive analysis and evaluation of core foundation courses to 1) identify key performance indicators, 2) provide recommendations supported by learning theories, and 3) develop a plan for supporting and enhancing student learning to increase course success and retention rates. The analysis and evaluation included aggregating various student data, course success rate data, course evaluations, and student evaluations of teachers (SETs). Using this data, Kate created grade distribution graphs and DFW (D/F/Withdraw) tables and charts. She then performed literature reviews to develop a report and recommendation for an alternative pedagogical approach, and she created interactive, digital learning resources in Articulate Storyline to support that approach.
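To make the DFW figures concrete: a DFW rate is simply the share of enrolled students who earn a D or an F or withdraw from the course. The sketch below uses made-up grade lists and hypothetical course names; the actual analysis drew on institutional records rather than hard-coded data.

```python
# Minimal sketch of a grade distribution and DFW-rate table.
# Grade lists and course names are invented for illustration only.
from collections import Counter

grades_by_course = {
    "Foundation Course A": ["A", "B", "B", "C", "D", "F", "W", "A", "C", "B"],
    "Foundation Course B": ["B", "C", "D", "W", "F", "F", "A", "C", "C", "D"],
}

for course, grades in grades_by_course.items():
    dist = Counter(grades)                       # grade distribution
    dfw = sum(dist[g] for g in ("D", "F", "W"))  # D grades, F grades, withdrawals
    rate = 100 * dfw / len(grades)               # DFW rate as a percentage
    print(f"{course}: {dict(sorted(dist.items()))}  DFW rate: {rate:.0f}%")
```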
As with most things, one size doesn't fit all, and that couldn't be more true of instructional design. There isn't one specific model, method, or process that will work for every project, which is why analysis and evaluation are critical. Real data should drive the design and will ultimately point to the best model, method, and process for the project, and more often than not, it's a combination of a few.
Below are descriptions of the mixture of models, methods, and processes used in much of Kate's instructional design work, especially for analyses and evaluations.
The analysis phase is the foundation for all subsequent design and development activities of a learning or training process. It is the often overlooked, yet extremely necessary forerunner of good instructional design. The results of any analyses help diagnose the problems at hand and build a better understanding of learners' needs, the contexts within which they operate, and the tasks they must perform.
Following ADDIE, the first phase of the instructional design process is to determine what the deficiencies or problems are. It can be thought of as identifying gaps between what should be happening and what is happening, and accounting for the causes of those gaps. In this way, it is a systematic search for the deficiencies between actual and desired performance and for the factors that prevent desired performance, carried out through the following steps:
What are the performance expectations (desired state)?
What is the current state of performance?
What are the gaps to performance (needs) and causes?
What are the solutions to bridge the gap?
The process is often an eye-opener for clients, because frequently the client has already decided that training is the solution and has already put together content to solve the problem. There is a strong tendency to see training as delivering information ("telling as teaching"). The needs analysis looks to uncover what type of "intervention" will actually produce the desired results.
A useful model for uncovering whether training is a helpful "intervention" is the "six boxes" model. In short, it breaks performance issues down into the following:
What causes performance gaps?
Lack of clear expectations and/or effective feedback?
Lack of tools and resources?
Lack of consequences or incentives?
Lack of knowledge and skill? (Training)
Lack of capacity? (Selection and Alignment)
Misaligned motives or preferences? (Attitudes)
After the needs and gaps have been identified, the next step is designing the appropriate intervention. The intervention could be training, or it could be something else entirely. Only an effective front-end analysis will determine this.
Although evaluation is the final stage of the ADDIE methodology, it should be considered not as the conclusion of a long process, but as the starting point for the next iteration of the ADDIE cycle. Instructional design is an iterative process, and evaluation should be carried out on a regular basis.
Formative evaluation runs parallel to the learning process and is meant to evaluate the quality of the learning materials and their reception by learners. Its main purpose is to monitor how well the content, learning activities, and assessment items align with the learning objectives, so that any deficiencies can be identified and appropriate changes made immediately.
The main goal of summative evaluation is to demonstrate, once the course is finished, that the training had a positive effect. For that, Kirkpatrick's training evaluation model is used (Level 1: Reaction, Level 2: Learning, Level 3: Behavior, Level 4: Results).
Summative evaluation helps find answers to the following questions:
Is continuing the learning program worthwhile?
How can the learning program be improved?
How can the effectiveness of training be improved?
How can the training be kept aligned with the learning strategy?
How can the value of the training be demonstrated?
Carrying out evaluation with the Kirkpatrick model is time-consuming and not always cheap, but it provides valuable insight into whether a training program is worth continuing and whether it will deliver the expected results and earn back the money spent on it, so that organizations can make an informed choice.
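As a purely illustrative example of the "earn back the money" question at Level 4 (Results), a back-of-the-envelope return-on-investment calculation might look like the sketch below; the figures are invented, and a real evaluation would need defensible estimates of both the costs and the monetary benefits.

```python
# Purely illustrative training ROI calculation with invented figures.
training_cost = 25_000        # design, delivery, facilities, and learner time
estimated_benefit = 40_000    # e.g., value of reduced errors or time saved

roi_percent = (estimated_benefit - training_cost) / training_cost * 100
print(f"Estimated training ROI: {roi_percent:.0f}%")  # -> 60%
```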
In addition, this model helps gauge the effectiveness of the training department and its alignment with the organization's goals. Some companies neglect to perform Level 3 and Level 4 evaluations, contenting themselves with analysis at the basic reaction level. However, this denies them a clear understanding of the effectiveness and usefulness of the training they conduct.
Summative evaluation helps get organizations on the right track, even if the training is found to have been of substandard quality. It enables them to correct past mistakes and improve the training, so that it may better benefit the next group of learners.