Re-complexifying learning measurements

Bouncing around in my head: Access, agency, analytics, API, assessment, challenge, competencies, comprehensive, complex, core business, consultation, data, decomplexification, design, intentional, integrated, learnification, outcomes, roadmaps, plans, student-centered, transparency… I’m sorry if this rambles a little. I feel that just beyond my grasp, there is a place where these ideas coherently intersect. And so I bumble on…

________________________________________

One common complaint I hear about trying to measure learning is that in the process we run the risk of decomplexification (as wonderfully described by Amy Collier).

Is the answer to stop trying to measure learning altogether? I don’t think so. Instead, we need to acknowledge our assumptions and knowledge gaps, including that learning is complex and intensely personal. We then need to seek to re-complexify both what we are measuring and what we are teaching/facilitating, in the hope of achieving a better (or at least less bad) approach.

A while ago, I did some work developing a scenario-based approach to learning. Part of my executive spiel was an explanation that decomposing a skill into sub-components, teaching each of them separately, and then “putting them all together” at the end is not aligned with how the human brain actually learns anything.

Traditional training

The good news was that my audience was usually operational folks who intuitively knew this to be true. “Yeah,” they would say, “they really don’t know anything when they come out of training. We work with them once they start, and usually within about 3-6 months the ones who are left have learned what to do.”

The bad news for me as the Learning Consultant was that they usually followed that statement by saying, “So we think training is a waste of time. People either have the skills to figure it out and they will, or they don’t and they’ll quit (or we’ll let them go).”

Our goal was to have 80% of our metrics in green. Having red and yellow metrics above green is a REALLY bad sign.

The even worse news was that usually, by the time I got involved in a project, almost no one knew what to do: either turnover had been so high that no one stuck around long enough to learn it, new skills were needed regularly, it was a new line of business to us all, or the barriers to entry had been set so high that almost all applicants were screened out. (On one particular program, the acceptance rate for call centre agent trainees was lower than the acceptance rate at MIT.)

What then was the solution when training was desperately required? Re-complexification of both the training and our learning assessments. We tossed trainees into complex “real-life” scenarios, allowed them to fail, and encouraged them to work together to develop solutions to ill-defined problems. We used the analogy of a flight simulator: you have to crash a lot of planes in order to learn how to fly.

We introduced the complex skill first. They had to be able to do all the bits.

When we were allowed, we threw out the multiple choice tests and “check your knowledge” assessments and replaced them with non-graded debriefs (What went well? What else went well? What could we have done better?). We did need to do a graded role-play at the end of the training, but the ultimate measures of our success? A reduction in angry customers, fewer repeat callers, and fewer agents quitting once on the actual job.

Lowest performers outperformed the site average after training.

We didn’t get it all right out of the gate. It turns out that our trainers and managers needed a lot of support to be successful. How they were evaluated also needed some modification, as did the dress code in some locations. (Apparently high-heeled shoes were not conducive to walking around a class all day long, facilitating group learning and regularly checking on individual progress.) Classroom configurations had to be changed to support the approach, and where they couldn’t be changed, we had to modify and adapt. Hiring profiles had to be changed to get more people into class. Many elements worked against our efforts inside the classroom. We had to complexify our thinking about our training systems, which turned out to mean a lot of work changing a lot of supporting structures.

Now I work in a post-secondary institution and I hear, “But that was training and this is learning.” We did base our training on learning theory that was not specifically designed for training, which I insisted on sharing in at least the appendix of every presentation (though the execs rarely cared).

Theory

Nonetheless, I’m willing to accept this difference between training and learning until the point that the talk turns to assessments involving multiple choice, short answer, and other forms of decomplexified measurement. If post-secondary learning is more complex, then shouldn’t the assessment methods also be more complex? Don’t we have experts in their fields teaching in post-secondary institutions precisely because they understand the complexities that others don’t?

I have seen a lot of excellent examples of complex assessment in post-secondary: development of a website (using code, not tools) for a real client, practicums, hacking wars in a virtual space, collaborative article development, and on and on. What these examples tend to have in common with one another is that they treat learning as something complex and acknowledge the challenges in assessing it. They can involve students defining what success in learning looks like for them and developing their own plans to achieve it. They often include elements of openness. They tend to offer high levels of formative feedback and interaction.

But it appears that there are also lots of less good examples. Lisa Loutzenheiser, for example, described her experiences in an Open Letter to College Professors.

I don’t, however, think that we can leave it up to college professors to address these challenges alone. Measuring more complex learning may not easily fit into the current constructs of many systems. If we push instructors to develop measurable learning outcomes aligned with our current system, decomplexification is likely the path of least resistance.

Developing, supporting and measuring complex learning is by definition really hard, perhaps close to impossible. Moreover, doing stuff that is harder often requires more work, from both students and instructors. Justifying why it is important and worth pursuing requires even more work. Re-complexification of learning measures will require a team effort.

__________________________________

Note: Complexified learning measures should not be confused with complex grading schemes. In at least one case I’m aware of, moving to assessing more complex learning required moving to a simplified Pass/Fail grading structure. The instructor reported that despite the simpler grading scheme, the quality of the student work increased.

 

 
