
Educator, applied linguist, language tester.

I genuinely haven't got much to say in response to this week's prompt (Week 5: Is community learning an invasive species?). I am reminded, though, of what I posted last week: How does the 'self-replicating' aspect of rhizomatic learning deal with self-replicating bad ideas?

It seems that rhizomatic or not, self-replication is problematic. However, if #rhizo15 is a good example of rhizomatic learning, then I'm not sure I see much self-replication happening.

Conclusion: I still don't get rhizomatic learning.

#rhizo15 week 4: Not the disappearing teacher


My response this week is really a personal reflection on my journey as a teacher. From being inspired by 'learner-centredness' to heutagogy, it seems that the more I buy into being the 'guide on the side', the more students become dissatisfied with me. As much as I believe that students aren't always the best judge of what's beneficial for them, especially in the long run, I also believe that turning them off my classes isn't exactly the way to help them learn.

For years now, I've been trying to 'facilitate' rather than 'teach', but I don't think I've ever found that elusive balance of promoting learner independence but yet making students feel that they are 'learning' something (which often means that they feel that I am teaching). Worse is that whenever I think I've hit on the right balance, I get discouraged by negative student evaluations. I now dread reading them, which is terrible for someone who genuinely believes feedback is a good thing.

The irony is that I don't think I do nearly enough to get students to take ownership of their learning. I like to think that if I can just find the right formula (flipped learning, anyone?), I could do this and more with full student buy-in, all within the few months I have with my classes. But I also have a strong suspicion that I don't have the right personality or skillset or that mysterious good-teacher x-factor to carry this off.

My current stance is that students as a rule are just starting off on this journey of self-directed learning, and pushing them too hard, too fast just won't work for me and/or the students that I teach. I do need to be the sage on the stage still. But part of my job as the sage is to persuade them that they can be sages sometimes too, to others and to themselves.

Inevitably, this persuasion will look like not-teaching to some. I can't prevent this totally, but as part of my development as a teacher, I am trying to find ways of minimising it. So recently, for instance, I've been working on the idea of feedback as a dialogue. We often complain of students not participating in class, in discussion forums, etc., and the reason often cited is that they don't find the topics engaging. But surely they would find their own work engaging? (Some won't, and the selfish teacher in me argues that it's outside of my remit to fix that.)

This semester I was gifted with a tiny class of 9, and I'm experimenting with making the assignments more formative, by pushing them to start thinking and talking about and planning their papers from day 1. Students tend to equate teacher talk with teaching, and I want the teacher talk to be part of a dialogue around their work right from the start.

I've discovered that I need to push a bit harder at the start, so that students don't give in to the temptation of working last minute. I can also tell that if I want this to scale, I need to give students more help in being better 'sages' to each other, probably by starting them earlier on developing what Gee calls learners' appreciative systems, by getting them to analyse (and hopefully internalise) what makes a good paper tick. If I teach this again, I will have real student models and real feedback for the class to work with. I've tried this with other classes in the past, but never foregrounded it, which I think made it far too forgettable and disconnected from their own work. Co-constructing a rubric should also come more easily if they develop such an appreciative system first.

I guess what I'm saying is that, for now, this teacher is nowhere near fading into the background to pop out only when needed, much less disappear. I haven't given up on heutagogy. But I also recognise how crucial trust is, not just in making feedback work, but also in convincing students that I know what I'm doing and that I truly have their best interests at heart. I will never be that warm and fuzzy and 'natural' teacher because that's just not who I am, so the trust building will take more mindful effort on my part.

This trust building and dialogue making can only really work at scale if we throw certain institutional rules out of the window. Take, for instance, the general rule at another institution against 'helping' students with their assignments by discussing them in class. Instead we are expected to write copious feedback on final submissions without any expectation of a response. For a teacher, this becomes soul-numbing work. This misguided notion of 'fairness' does nothing for learners and learning, instead reinforcing the idea that teachers are out to get them.

Granted, formative feedback to a class of 40 or more is a lot of work too. Which is why this phobia of 'collusion' needs to go too. Why talk about collaborative learning when students are warned against reading classmates' drafts to give feedback, for fear of 'accidental collusion'? If the plagiarism software highlights matches, can't teachers use their judgement and know better?

This approach won't be the 'magic' formula for me (I don't think there is one). I just have to take things one semester at a time, as I always have. It's that or give up teaching. We often complain about teacher education being inadequate, but perhaps its true inadequacy is in not preparing teachers to learn on the job in a way that's unstructured, self-directed, connectivist and even rhizomatic. We aren't prepared to deal with and learn from the uncertainties and the setbacks, nor are we disabused (sufficiently) of the notion that there's a 'magic' formula or one right answer. The way we are usually evaluated doesn't take this into account either. It's no wonder that we struggle to prepare our students for the same journey. (How does the 'self-replicating' aspect of rhizomatic learning deal with self-replicating bad ideas?)

ATC21s week 3: Assessing Collaborative Problem Solving skills


I'll start with a note that I've been wondering what exactly is novel about the developmental progression scales. Today it dawned on me that they are indeed similar if not identical to the kind of proficiency scales so common in language testing. I suppose it had never occurred to me that there were other ways to measure development and progress in learning. Anyway, for this reason, I wish there had been more of an emphasis on criterion referencing, since the use of such a scale can easily 'devolve' into norm referencing.
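To make the criterion- vs norm-referencing distinction concrete, here is a toy sketch (entirely my own; the names, scores and cut score are invented, not from the course) of the same raw scores interpreted both ways:

```python
# The same raw scores, read two ways.
scores = {"Ana": 72, "Ben": 55, "Cai": 88, "Dee": 61}

# Criterion referencing: each learner is compared to a fixed standard,
# regardless of how anyone else performed.
CUT_SCORE = 60  # hypothetical mastery threshold
criterion = {name: ("meets" if s >= CUT_SCORE else "not yet")
             for name, s in scores.items()}

# Norm referencing: each learner is compared to the group, so the
# interpretation changes whenever the cohort changes.
ranked = sorted(scores, key=scores.get, reverse=True)
norm = {name: f"rank {ranked.index(name) + 1} of {len(ranked)}"
        for name in scores}

print(criterion)
print(norm)
```

The point of the sketch is that the criterion-referenced reading stays stable across cohorts, while the norm-referenced one is only ever about relative standing, which is how a progression scale can quietly 'devolve' into ranking.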

This week we learn about using the progression scales to observe students in CPS and find their ZPD -- like using a systematic observation scheme. We can use different colour highlighters with the printed observational framework to mark where the student is at each observation, to see if the student has progressed.
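A rough sketch of what that record-keeping might look like digitally (the student names, dates and scale levels are all invented for illustration) instead of coloured highlighters on a printed framework:

```python
from datetime import date

# Each student maps to a list of (observation date, scale level).
observations = {
    "student_a": [(date(2015, 3, 2), 1), (date(2015, 4, 6), 2)],
    "student_b": [(date(2015, 3, 2), 2), (date(2015, 4, 6), 2)],
}

def has_progressed(history):
    """True if the latest observed level exceeds the earliest."""
    ordered = sorted(history)  # tuples sort by date first
    return ordered[-1][1] > ordered[0][1]

for student, history in observations.items():
    print(student, has_progressed(history))
```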

My thoughts at this point are similar to last week's. The previous two weeks now really do seem like an overly long preamble to this practical lesson. Was there a need to explain in so much detail how the scale was developed? And would scales developed using such computer-based puzzles work for other (maybe more authentic) kinds of CPS?

We are shown a 'case study' called Laughing Clowns, which is just the same sort of computer-based puzzle. We are supposed to watch the video of the task being performed and practise using the observational framework. We are told what skill examples to watch for. Okay, given that teachers won't have this sort of puzzle on this sort of software for their students to work on to develop CPS (and should they use them anyway?), I wonder how useful this practice is. I've done unstructured observations of students working collaboratively, and it's nothing so neat and clean as the polite text-based turn-taking that this learner dyad demonstrated in the video. This difficulty is acknowledged in the video, but clearly these examples are meant to help us 'start easy'. (Whether we can progress on our own developmental progression scale as observers would be up to us, I suppose.)

The second practice case study is a more difficult puzzle called Olive Oil. The third is called Balance, and the last is called Game of 20 (a mathematical game). Again, they are puzzles rather than problems per se -- not very realistic or interesting to me. It's ironic that the first hands-on week should be the most boring to me (but as I tweeted before, I am probably not the target audience).

Writing my response to this week's prompt made me think of heutagogy again. While the CPS progression scale is undoubtedly useful (models are always useful even if not accurate), assessing CPS this way (give students a puzzle, observe them solve it, mark progression on a scale) is not very heutagogical or 21st century, IMO. How far away is this from the old-hat performance assessment that's pretty common in language assessment? How much of this is 'old wine in new bottles'? Where is the peer and self assessment, space for out-of-class 'informal' learning, authenticity and real-world relevance? I'm not saying that this model leaves no room for all that. But I think they should be central to assessment in the digital age, not nice-to-have add-ons. Backwash is a result not just of what we assess, but how we assess, too.

#rhizo15 week 3: The 'myth' of content


I don't feel nearly as inspired by this week's topic as I did by last week's, but I'll try anyway!

First, a warning that I don't have any interesting anti-content or un-content things to say. I see this question as being about prescription, about top-down curriculum development. There's been a lot of talk about negotiated curricula, which would be a very learner-centred thing to do but requires a lot of the teacher -- breadth if not depth of knowledge, a well-developed capacity for self-directed learning, adaptability, willingness to admit that she doesn't know everything, identification as a learner herself -- the 21st century teacher?

So I'm circling back to heutagogy, which is characterised by 

  • Recognition of the emergent nature of learning, and hence the need for a 'living' curriculum that is flexible and open to change as the learner learns;
  • The involvement of the learner in this 'living' curriculum as the key driver.

The elements of a heutagogical approach are:

  • Learner-defined learning contracts
  • Flexible curriculum
  • Learner-directed questions
  • Flexible and negotiated assessment
  • Reflective practice
  • Collaborative learning  

Can we learn together but with each of us following our own curriculum? Could we design our own assessments? What assessment literacy/literacies are needed? What demands does supporting all this place on the teacher?

An emerging theme in my own research is the completely obvious observation that the assessment of digital literacies requires digitally literate teachers. Similarly obvious: heutagogical learning requires heutagogical teachers (or facilitators if one prefers). And it's probably my own preoccupation talking, but it looks like assessment literacy is a big (missing) piece of the puzzle here. Can we expect learners to design their own assessments if they lack assessment literacy? Can we expect teachers to guide learners if they lack the same?

I've mentioned that I find the ATC21s conceptualisation of collaborative problem solving problematic. This CPS construct is the basis of their CPS developmental progression scale, which teachers can use to observe their students in the process of CPS, in order to assess their CPS development. I'll say more about this in my post tomorrow. Here I just want to make the point that this approach to assessing 21st century skills is not heutagogical. Even when not 'curriculum-based' as claimed, the curriculum or content here is in fact embedded in the assessment, and this is still top-down, because the learners play no part in developing it.

I am not making any claims to teaching heutagogically here. I'm still experimenting with ways of doing that within the constraints of institutions and the limited time I have with any one student (because university courses are so short). But it's certainly something to think about: how can I help my students understand assessment in the digital age by designing ours together? This semester my language assessment students and I were able to co-construct our assessment rubric and negotiate deadlines. Next semester (if I get to teach this again), could the students play a bigger role in designing the assessments and therefore determining the content?

#rhizo15 week 2: Learning is uncountable, so what do we count?


This isn't one of my scheduled posts for thematic tweets, and has nothing to do with ATC21s as such. It's a little something for me to get my feet wet with #rhizo15. I've been hesitant to get started because I doubted my ability to contribute something. Given my issues with the much easier ATC21s, though, I thought I should try harder here, and balance my first real xMOOC experience with a cMOOC one.

As I type this, week 3 has already started, but I'll post my week 2 contribution anyway -- it was hard enough to come up with! Here's Dave's week 2 prompt. You'll note that it's conveniently right up my assessment alley. I don't know if I can respond to week 3's the same way!

Warning: my response is a rough, incomplete thing but maybe this is par for the course for rhizo learning. (I should confess here that I am ambivalent about rhizomatic learning as a theory, and hope that this experience helps to sort out my ideas about it.)

Okay. So we can't count learning. But I've always accepted this. Psychometricians working with Item Response Theory talk about latent traits: 'latent is used to emphasize that discrete item responses are taken to be observable manifestations of hypothesized traits, constructs, or attributes, not directly observed, but which must be inferred from the manifest responses' (Wikipedia). 

So when we assess, we are not measuring actual traits (or abilities) but the quality of evidence of such. It's all inferred and indirect, so we can't measure learning in the sense of holding a ruler up to it ('let's look at how much you've grown!').
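For the sake of illustration (this is my own toy example, not anything from the prompt or from Wikipedia), the simplest latent-trait model, the one-parameter Rasch model, makes this inference explicit: we never observe ability θ directly, only right/wrong responses whose probabilities it is hypothesised to govern.

```python
import math

def p_correct(theta, b):
    """Rasch model: P(correct response | ability theta, item difficulty b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A learner whose ability exactly matches an item's difficulty has a
# 50% chance of getting it right; higher ability raises the odds.
print(p_correct(0.0, 0.0))  # 0.5
print(p_correct(1.0, 0.0))  # roughly 0.73
```

The ability estimate is whatever value of θ best explains the observed pattern of responses, which is exactly the sense in which the trait is inferred rather than measured with a ruler.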

Also learning happens continuously -- we can't start and stop at will. We can't measure it, even indirectly, as you might temperature, in real time. By the time the test finishes and is marked or feedback given, learning has already moved on.

So we never measure learning per se. As Lisa says, it's only symbolic. It's just a useful fiction.

But perhaps Dave's question is not about measuring quality of such tangible evidence? At least the conventional kind?

If it isn't about product, is it about process, which some teachers already assess?

Are we talking about measuring 21st century 'skills' like CPS (see previous post)? ATC21s has very cleverly broken down CPS into more easily measurable bits, but when a construct is broken down like that, its integrity tends to suffer (something I forgot to include in my previous post). Is it about measuring literacies (situated social practices), as I'm attempting to tackle in my study? Learning dispositions?

But tangible evidence is also required to 'see' all the above. Are we talking of true 'unmeasurables', if psychometricians admit to any? What might they be?

Maybe it's about assessment that isn't externally imposed -- self assessment? How do we activate learners as owners of their own learning, as per Wiliam's framework of formative assessment? How do we make reflective learning second nature?

How can we give self assessment currency, given stakeholders' obsession with reliability of measurement and 'fairness'? How can we give it validity? And have people understand and accept that validity?

Which leads to heutagogy. We have to be good at it to cultivate it in others; our education ministry says teachers should cultivate Self-directed Learning capabilities in our learners, but how do they cultivate it in themselves? How can we be self-directed about SDL?

How about we throw out quantitative measures? No counting! Maybe that's how we throw out the comparing and ranking of norm-referenced assessment that people tend to default to (I'm not sure how many participants really got criterion-referencing.)

How about we become ethnographers of learning? Help learners become autoethnographers of their own learning? The kind that's mostly, if not 100%, qualitative. (Before you say that the average teacher has too much to do, recall that she has an entire class of potential research assistants.) I'm sure this is (as usual) not an original idea. Do you know anyone who's tried it?

'Not everything that can be counted counts, and not everything that counts can be counted.' - William Bruce Cameron