ATC21s week 4: Teaching collaborative problem solving

This is the final week of video lectures, so today's reflective post will be my last (I'm not doing the assignment).

Video 4.0 discusses the differences between collaboration and cooperation. Collaboration is also not group work (though I don't see how group work can't sometimes end up being collaborative in their sense of the word). Collaboration is defined very specifically here, to the extent that I can see why it might be hard to find authentic problems for the classroom that fit this model perfectly.

4.1 discusses inter- and intra-individual differences among learners: individuals develop different aspects of the CPS construct at different rates. As an example of how the teacher might cater to such a diverse group, a tertiary-level CPS task is described. I'm not very clear about the objective of this task, so I won't go into it here. It's highlighted that the same task can be scaffolded differently for different students depending on their cognitive and social development. But what if, within the group, members are at different stages of development? How can we scaffold the task differently for each person if they are all working together on the same task? In 4.3 the researcher interviewed mentions teachers getting students to read up on different topics so that they each bring something different to the problem solving. I suppose something like this could be done, though it smacks of too much teacher engineering -- is this still collaboration and not cooperation?

4.2 is an interview with Griffin and Care. Care explains that the kind of CPS tasks used in their study only tap into the 'tip of the iceberg' as far as the CPS construct is concerned. This indicates that they do indeed believe that such puzzles (as I call them) measure the same construct as more complex (more authentic?) tasks. I remain rather dubious about this. Reading comprehension is mentioned here for comparison:

whether it's multiple choice or some open item questions and what we get from that assessment is some indication of the skill

But of course multiple-choice and open-ended items are likely to differ in validity, and this isn't trivial -- one may tap into more of the iceberg-construct than the other. Or tap into a different iceberg altogether! My point is that construct validity cannot be assumed. Of course, it could be that ATC21s has research showing that their CPS tasks and real-life CPS involve the same construct, and I've missed an article somewhere.

Griffin then says:

The issue that you spoke about with the reading test, we've managed over a hundred years or so, to become very good at that kind of assessment, and in interpreting that. And we know now that that one piece of text and the two or three questions that are associated with it, are only a sample of what we could do. So, we build more and more complicated texts. Yeah. And more difficult questions and we address higher and higher skills in the test. So over a 40 item test of a reading comprehension we go from very simple match the word to this picture through to judging an author's intention. Yeah. And so behind the multiple choice questions on a piece of paper there's also a lot of complex thinking that goes on and there's, behind that, there's the idea of a developmental progression, or a construct that we're mapping, but the teacher, the student never sees that until after the test has been developed, interpreted and reported.

Well now. This is such an oversimplification of what we know about assessing reading that it's at best a poorly chosen analogy.

It's then claimed that while the learners are enjoying their games, the researchers are actually assessing their CPS development. Putting aside the fact that teachers likely won't have access to such games (as I've pointed out previously), I'm not sure this is a good example of assessing through games. It's been a while since my gaming days, but I would never consider these good games, and I really think you need good games for game-based learning -- the same goes for assessment.

a reading test is often read a passage, look at a question, choose an alternative out of four possible alternatives by pressing a button or ticking a box. What we have done enables a much more complex view of that. We can now tap into what's going on in the background behind the student's reading comprehension, what they're thinking while they're trying to work out what alternative they choose.

Hmm. I think they should stop referring to reading assessment unless they really do have some novel reading assessment, along the same lines as their CPS 'games', that has somehow never been disclosed to language testers.

And finally we come to what I see as the crux of it all:

You know, one of the challenges for us still is that, we don't know yet whether the skills that we're picking up will generalize to real life situations. That's one of the big issues. And in part we're, we're hampered and we're constrained, because of the nature of how you pick up the assessment data. You know, because, if we're talking about the sorts of problems to which we want to bring collaborative problem solving, like big problems, or say, global warming, The issue is that in the school context what you typically give to student is well defined problems. Problems that they've given a lot more scaffolding to work through, they're given structure to work through. If we go too far down that path, too much structure and too much scaffolding, they won't learn the particular underlying skills that we need that they can then generalize to take to the big problems. So there's some real issues in our assessments.

I don't know if I understand this correctly. Are they saying that their CPS tasks are well defined because this is a constraint of schooling? And that this can be compensated for by providing less structure and scaffolding? In my opinion, schools can definitely do CPS differently if they want to -- and they can't do it ATC21s-style anyway, with that kind of electronic game. It seems to me that the well-definedness of their games is a constraint of their research design, not of schools; hence the remark about being constrained by 'the nature of how you pick up the assessment data'. But assessment data can be 'picked up' in different ways.

4.3 is an interview with a researcher who is working with teachers on implementing CPS in their schools. This is an interesting account that I think teachers on the course would want to know more about. At the start, the researcher says that the teachers had their students do the online CPS tasks so that they had a baseline to work with. Could all schools do this? What if they couldn't? In 4.4 we hear from the two schools involved in the study, and again, while interesting, it would be even more interesting to hear from schools implementing this without the technological support of the research team.

4.4 is a recap, with some future directions. It's pointed out that teachers have to be effective collaborators themselves if they want to teach collaboration to their students, and I wholeheartedly agree. That said, if they are taught this in pre-service training in the same tip-of-the-iceberg way, I'm not sure they would be prepared for real-world collaboration in the school.

There are also some extra videos available, I think just for this week, under Resources. One of them is called 'Learning in digital networks', and it suggests that this sort of CPS task gives learners a start on ICT literacy, or learning in digital networks. I really don't know about this. Given the rich digitally networked environment kids live in (at least in developed countries), do they really need to start with something like this? Do we have to train them on a toy 'internet' before they can learn on the real one? Chances are many are already learning and collaborating on the real internet.

This highlights the difference between this course's orientation to 21st century competencies and mine. ATC21s takes a more cognitive, more skills-based, more measurement-centric approach that, while contributing a great deal to our understanding of such competencies, may be of limited usefulness in transforming learning in the classroom. I like that the ATC21s team are clearly more interested in learning and development, but I think a more social practice approach (to competencies, to assessment) is better aligned with formative aims and better able to achieve them. This is probably my research bias talking, so I'll stop here.

I hope you've enjoyed my reflective posts on the ATC21s MOOC! Next week, something new.