ATC21s week 3: Assessing Collaborative Problem Solving skills

I'll start with a note: I've been wondering what exactly is novel about the developmental progression scales. Today it dawned on me that they are similar, if not identical, to the kind of proficiency scales so common in language testing. I guess it had just never occurred to me that there might be any other way to measure development and progress in learning. Anyway, for this reason, I wish there had been more emphasis on criterion referencing, since the use of such a scale can easily 'devolve' into norm referencing.

This week we learn about using the progression scales to observe students engaged in CPS and find their zone of proximal development (ZPD) -- essentially a systematic observation scheme. We can use different colour highlighters with the printed observational framework to mark where the student is at each observation, so we can see whether the student has progressed.
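To make the idea concrete for myself: if I were doing this digitally rather than with highlighters, a toy version might look like the sketch below. The level names, dates and student are entirely my own invention, not the actual ATC21s framework -- the point is just that each observation is a 'mark' on an ordered scale, and progression means the latest mark sits higher than the first.

```python
from datetime import date

# Hypothetical progression levels, ordered low to high.
# These are invented labels, NOT the actual ATC21s scale.
LEVELS = ["unistructural", "multistructural", "relational", "extended"]

def level_index(level: str) -> int:
    """Position of a level on the (ordered) progression scale."""
    return LEVELS.index(level)

# Each observation is one 'highlighter mark': a date plus the level
# at which the student was observed performing.
observations = {
    "Student A": [
        (date(2014, 5, 1), "unistructural"),
        (date(2014, 5, 15), "multistructural"),
        (date(2014, 5, 29), "multistructural"),
    ],
}

def has_progressed(marks: list[tuple[date, str]]) -> bool:
    """Compare the earliest and latest observations to see whether
    the student has moved up the scale."""
    ordered = sorted(marks)  # tuples sort by date first
    first, last = ordered[0][1], ordered[-1][1]
    return level_index(last) > level_index(first)

for student, marks in observations.items():
    verdict = "progressed" if has_progressed(marks) else "did not progress"
    print(f"{student}: {marks[0][1]} -> {marks[-1][1]} ({verdict})")
```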

My thoughts at this point are similar to last week's. The previous two weeks now really do seem like an overly long preamble to this practical lesson. Was there a need to explain in so much detail how the scale was developed? And would scales developed using such computer-based puzzles work for other (maybe more authentic) kinds of CPS?

We are shown a 'case study' called Laughing Clowns, which is just the same sort of computer-based puzzle. We are supposed to watch a video of the task being performed and practise using the observational framework, and we are told which skill examples to watch for. Okay, given that teachers won't have this sort of puzzle, on this sort of software, for their students to work on to develop CPS (and should they use them anyway?), I wonder how useful this practice is. I've done unstructured observations of students working collaboratively, and it's nothing as neat and clean as the polite text-based turn-taking this learner dyad demonstrated in the video. This difficulty is acknowledged in the video, but clearly these examples are meant to help us 'start easy'. (Whether we progress on our own developmental progression scale as observers would be up to us, I suppose.)

The second practice case study is a more difficult puzzle called Olive Oil. The third is called Balance, and the last is Game of 20 (a mathematical game). Again, these are puzzles rather than problems per se -- not very realistic or interesting to me. It's ironic that the first hands-on week should be the most boring one for me (but, as I tweeted before, I am probably not the target audience).

Writing my response to this week's prompt made me think of heutagogy again. While the CPS progression scale is undoubtedly useful (models are always useful even if not accurate), assessing CPS this way -- give students a puzzle, observe them solving it, mark their progression on a scale -- is not very heutagogical or 21st-century, IMO. How far is this from the old-hat performance assessment that's so common in language assessment? How much of it is 'old wine in new bottles'? Where are the peer and self-assessment, the space for out-of-class 'informal' learning, the authenticity and real-world relevance? I'm not saying this model leaves no room for all that. But I think those things should be central to assessment in the digital age, not nice-to-have add-ons. Backwash is a result not just of what we assess, but of how we assess it, too.