
Educator, applied linguist, language tester.

hsiaoyun.net/

groups.diigo.com/group/assessment-literacy

#rhizo15 week 4: Not the disappearing teacher

6 min read

My response this week is really a personal reflection on my journey as a teacher. From being inspired by 'learner-centredness' to heutagogy, it seems that the more I buy into being the 'guide on the side', the more dissatisfied students become with me. As much as I believe that students aren't always the best judge of what's beneficial for them, especially in the long run, I also believe that turning them off my classes isn't exactly the way to help them learn.

For years now, I've been trying to 'facilitate' rather than 'teach', but I don't think I've ever found that elusive balance of promoting learner independence while still making students feel that they are 'learning' something (which often means feeling that I am teaching). Worse, whenever I think I've hit on the right balance, I get discouraged by negative student evaluations. I now dread reading them, which is terrible for someone who genuinely believes feedback is a good thing.

The irony is that I don't think I do nearly enough to get students to take ownership of their learning. I like to think that if I can just find the right formula (flipped learning, anyone?), I could do this and more with full student buy-in, all within the few months I have with my classes. But I also have a strong suspicion that I don't have the right personality or skillset or that mysterious good-teacher x-factor to carry this off.

My current stance is that students as a rule are just starting off on this journey of self-directed learning, and pushing them too hard, too fast just won't work for me and/or the students that I teach. I do need to be the sage on the stage still. But part of my job as the sage is to persuade them that they can be sages sometimes too, to others and to themselves.

Inevitably, this persuasion will look like not-teaching to some. I can't prevent this totally, but as part of my development as a teacher, I am trying to find ways of minimising it. So recently, for instance, I've been working on the idea of feedback as a dialogue. We often complain of students not participating in class, in discussion forums, etc., and the reason often cited is that they don't find the topics engaging. But surely they would find their own work engaging? (Some won't, and the selfish teacher in me argues that it's outside of my remit to fix that.)

This semester I was gifted with a tiny class of 9, and I'm experimenting with making the assignments more formative, by pushing them to start thinking and talking about and planning their papers from day 1. Students tend to equate teacher talk with teaching, and I want the teacher talk to be part of a dialogue around their work right from the start.

I've discovered that I need to push a bit harder at the start, so that students don't give in to the temptation of working last minute. I can also tell that if I want this to scale, I need to give students more help in being better 'sages' to each other, probably by starting them earlier on developing what Gee calls learners' appreciative systems: getting them to analyse (and hopefully internalise) what makes a good paper tick. If I teach this again, I will have real student models and real feedback for the class to work with. I've tried this with other classes in the past, but never foregrounded it, which I think made it far too forgettable and disconnected from their own work. Co-constructing a rubric should also come easier if they develop such an appreciative system first.

I guess what I'm saying is that, for now, this teacher is nowhere near fading into the background to pop out only when needed, much less disappear. I haven't given up on heutagogy. But I also recognise how crucial trust is, not just in making feedback work, but also in convincing students that I know what I'm doing and that I truly have their best interests at heart. I will never be that warm and fuzzy and 'natural' teacher because that's just not who I am, so the trust building will take more mindful effort on my part.

This trust building and dialogue making can only really work at scale if we throw certain institutional rules out of the window. Take, for instance, the general rule at another institution against 'helping' students with their assignments by discussing them in class. Instead, we are expected to write copious feedback on final submissions without any expectation of a response. For a teacher, this is soul-numbing work. This misguided notion of 'fairness' does nothing for learners and learning; instead it reinforces the idea that teachers are out to get them.

Granted, formative feedback to a class of 40 or more is a lot of work too. Which is why this phobia of 'collusion' needs to go too. Why talk about collaborative learning when students are warned against reading classmates' drafts to give feedback, for fear of 'accidental collusion'? If the plagiarism software highlights matches, are teachers unable to exercise their superior judgement and know better?

This approach won't be the 'magic' formula for me (I don't think there is one). I just have to take things one semester at a time, as I always have. It's that or give up teaching. We often complain about teacher education being inadequate, but perhaps its true inadequacy is in not preparing teachers to learn on the job in a way that's unstructured, self-directed, connectivist and even rhizomatic. We aren't prepared to deal with and learn from the uncertainties and the setbacks, or disabused (sufficiently) of the notion that there's a 'magic' formula or one right answer. The way we are usually evaluated doesn't take this into account either. It's no wonder that we struggle to prepare our students for the same journey. (How does the 'self-replicating' aspect of rhizomatic learning deal with self-replicating bad ideas?)

#rhizo15 week 2: Learning is uncountable, so what do we count?

4 min read

This isn't one of my scheduled posts for thematic tweets, and has nothing to do with them as such. It's a little something for me to get my feet wet with #rhizo15. I've been hesitant to get started with it because I doubted my ability to contribute something. Given my issues with the much easier ATC21S, though, I thought I should try harder with #rhizo15, and balance my first real xMOOC experience with a cMOOC one.

As I type this, week 3 has already started, but I'll post my week 2 contribution anyway -- it was hard enough to come up with! Here's Dave's week 2 prompt. You'll note that it's conveniently right up my assessment alley. I don't know if I can respond to week 3's the same way!

Warning: my response is a rough, incomplete thing but maybe this is par for the course for rhizo learning. (I should confess here that I am ambivalent about rhizomatic learning as a theory, and hope that this experience helps to sort out my ideas about it.)

Okay. So we can't count learning. But I've always accepted this. Psychometricians working with Item Response Theory talk about latent traits: 'latent is used to emphasize that discrete item responses are taken to be observable manifestations of hypothesized traits, constructs, or attributes, not directly observed, but which must be inferred from the manifest responses' (Wikipedia). 

So when we assess, we are not measuring actual traits (or abilities) but the quality of evidence of such. It's all inferred and indirect, so we can't measure learning in the sense of holding a ruler up to it ('let's look at how much you've grown!').
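Since I'm leaning on IRT here, a minimal sketch may help make 'inferred from manifest responses' concrete. Below is a toy Rasch model in Python (my own illustration, with invented item difficulties and responses; real analyses use dedicated software):

```python
import numpy as np

# Rasch model: the probability of a correct response depends only on the
# gap between a person's latent ability (theta) and an item's difficulty
# (b), both expressed on the same logit scale.
def p_correct(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Five items of (made-up) known difficulty and one student's responses.
difficulties = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
responses = np.array([1, 1, 1, 0, 0])  # 1 = correct, 0 = incorrect

# Infer theta by crude maximum likelihood over a grid: the 'measurement'
# is an inference from the response pattern, never a direct reading.
def log_likelihood(theta):
    p = p_correct(theta, difficulties)
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

grid = np.linspace(-4, 4, 801)
theta_hat = grid[np.argmax([log_likelihood(t) for t in grid])]
print(f"Inferred ability: {theta_hat:.2f} logits")
```

What comes out is an estimate on an arbitrary scale, with error attached: there is no ruler being held up to the learner.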

Also, learning happens continuously -- we can't start and stop it at will. We can't measure it in real time, even indirectly, the way you might measure temperature. By the time a test is finished and marked, or feedback given, learning has already moved on.

So we never measure learning per se. As Lisa says, it's only symbolic. It's just a useful fiction.

But perhaps Dave's question is not about measuring the quality of such tangible evidence? At least not the conventional kind?

If it isn't about product, is it about process, which some teachers already assess?

Are we talking about measuring 21st century 'skills' like CPS (see previous post)? ATC21S has very cleverly broken CPS down into more easily measurable bits, but when a construct is broken down like that, its integrity tends to suffer (something I forgot to include in my previous post). Is it about measuring literacies (situated social practices), as I'm attempting to tackle in my study? Learning dispositions?

But tangible evidence is also required to 'see' all the above. Are we talking of true 'unmeasurables', if psychometricians admit to any? What might they be?

Maybe it's about assessment that isn't externally imposed -- self assessment? How do we activate learners as owners of their own learning, as per Wiliam's framework of formative assessment? How do we make reflective learning second nature?

How can we give self assessment currency, given stakeholders' obsession with reliability of measurement and 'fairness'? How can we give it validity? And have people understand and accept that validity?

Which leads to heutagogy. We have to be good at it to cultivate it in others; our education ministry says teachers should cultivate Self-directed Learning capabilities in our learners, but how do they cultivate it in themselves? How can we be self directed about SDL?

How about we throw out quantitative measures? No counting! Maybe that's how we throw out the comparing and ranking of norm-referenced assessment that people tend to default to (I'm not sure how many participants really got criterion-referencing.)

How about we become ethnographers of learning? Help learners become autoethnographers of their own learning? The kind that's mostly, if not 100%, qualitative. (Before you say that the average teacher has too much to do, recall that she has an entire class of potential research assistants.) I'm sure this is (as usual) not an original idea. Do you know anyone who's tried it?

'Not everything that can be counted counts, and not everything that counts can be counted.' - William Bruce Cameron

ATC21S week 2: A closer look at 21st century skills: collaborative problem solving

7 min read

This week I'm somewhat distracted by an upcoming trip to Bangkok to present at the 2nd Annual Asian Association for Language Assessment Conference. This is the first time I am formally presenting on my study, so I'm quite nervous! Fortunately I was able to squeeze in some time for week 2 of ATC21S.

Here's a quick summary of this week's lesson:

1. What is collaborative problem solving (CPS)? There are existing problem solving models (cited are Polya, 1973, and PISA, 2003/2012), but they do not include the collaborative component. Therefore ATC21S has come up with their own:

  • Collect and share information about the collaborator and the task
  • Check links and relationships, organise and categorise information
  • Rule use: set up procedures and strategies to solve the problem using an “if… then” process
  • Test hypotheses using a “what if” process and check process and solutions

The CPS construct is made up of social skills and cognitive skills.

2. Social skills are participation, perspective taking and social regulation skills. These can be further unpacked:

  • Participation: action, interaction and task completion
  • Perspective taking: responsiveness and audience awareness
  • Social regulation: Metamemory (own knowledge, strengths and weaknesses), transactive memory (those of partners), negotiation and responsibility initiative

There are behavioural indicators associated with each of these elements. (At this point, I was pretty sure that Care and Griffin don't mean to suggest that teachers conduct Rasch analysis themselves, but rather use already developed developmental progressions.)

3. Cognitive skills are task regulation, and knowledge building and learning skills:

  • Task regulation: problem analysis, goal-setting, resource management, flexibility and ambiguity management, information collection, and systematicity
  • Knowledge building and learning: relationships, contingencies and hypotheses

Again, each element has associated indicators.

4. We come back to the developmental approach that integrates the work of Rasch, Glaser and Vygotsky. Teachers need a framework that they can use to judge where their students are in their CPS development. There are existing ones (such as the ubiquitous Bloom's), but none are suited to measuring CPS skills. So what we need is a new empirically derived framework that allows teachers to observe students in CPS action and judge where they are.

5. Empirical progressions are explained, with examples such as PISA and TIMSS. We are then presented with the progression that ATC21S has developed for CPS. The table is too large to reproduce here, but essentially it expands all the elements in 2 and 3 above into progressions, so that you end up with five scales.
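To make 'judging where students are' concrete: once the elements have been scaled, a student's estimated location on a sub-scale can be read off against cut-points separating the levels. A hypothetical Python sketch (the cut-offs here are invented; the real ones come from ATC21S's empirical scaling):

```python
import bisect

# Hypothetical cut-points (in logits) separating six levels on one
# CPS sub-scale. The actual values live in ATC21S's technical reports.
cuts = [-2.0, -1.0, 0.0, 1.0, 2.0]

def level_for(theta):
    # bisect_right counts how many cut-points theta has passed.
    return bisect.bisect_right(cuts, theta) + 1  # levels run 1..6

print(level_for(-1.3))  # a student estimated at -1.3 logits is at level 2
```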

 

Impressive, right? Except I'm not quite sure about the tasks they used to develop this. The example they showed was of two students connected over the internet and chatting by typing, attempting to solve what appears to be more of a puzzle than a problem -- that is, the sort of problem teachers cook up to test students' intellectual ability (shades of ?). The 2nd volume of the book series actually has a chapter that discusses this in more detail and seems to confirm that they used puzzles of this sort. I understand of course that doing it this way makes it easier to collect the sort of data they wanted. But given that the tasks aren't very authentic, to what extent are they representative of the target domain? Are there issues of construct validity? I will need to read further, if there is available literature, before I make up my mind. It would be interesting, if not already done, to conduct a qualitative study using more authentic problems, more students per team, observation, artefact collection, (retrospective) interviews, and so on. You wouldn't get the quantity of data the original study did, but this sort of rich data could help us check the validity of the framework. It could also be of more practical value to teachers who actually have to teach and assess this without fancy software and a team of assistants.

I won't deny that I'm rather disappointed that Rasch measurement is really 'behind the scenes' here, though I'm not surprised. I can't help but wonder if it's really necessary to make Rasch appear so central in this course, especially since some of my classmates seem to misunderstand its nature. This is not surprising -- Rasch is not the sort of thing you can 'touch and go' with. There is some confusion about criterion referencing too (IMO it's hard to make sense of it without comparing it to norm referencing and explaining how the two are usually used in assessment). ZPD is faring a little better, probably because it's familiar to most teachers. I am however surprised to see it occasionally referred to rather off-handedly, as if it were something easy to identify.

Would it make more sense to focus on the practicalities of using an established developmental progression? It's too early to say, I guess, but quite a few of my classmates are already questioning the practicality of monitoring the progress of large classes. This is where everyday ICT-enabled assessment strategies can come into play. I also hope to see more on how to make assessments really formative. I learnt from this week's quiz (if it was mentioned elsewhere I must have missed it) that assessments designed to measure developmental progression are meant to be both formative and summative. Okay, great, but IMO it's all too easy to miss the formative part completely without even realising it -- remember that an assessment is only formative if there's a feedback loop. The distinction between the two uses cannot be taken lightly, and there is really no point harping on development and ZPD and learning if we ignore how assessment actually works to make progress happen.

Which brings me to the assessment on this course. If you're happy with the quizzes so far you might want to stop reading here.

 

Diligent classmates may have noticed from my posts that I REALLY do not like the quizzes. Initially it was the first so-called self-assessment that I took issue with. Briefly, its design made it unfit for purpose, at least as far as I'm concerned. After doing another 'self-assessment' for week 2 and the actual week 2 quiz, I'm ever more convinced that the basic MCQ model is terrible for assessing something so complex. It's quite ironic that a course on teaching and assessing 21C skills should rely on assessments that are assuredly not 21C. Putting what could be a paper MCQ quiz online is classic 'old wine in new bottles', and we really cannot assess 21C skills in 19C or 20C ways. I have written (to explain my own study) that:

... digital literacies cannot be adequately assessed if the assessment does not reflect the nature of learning in the digital age. An assessment that fails to fully capture the complexity of a construct runs the risk of construct under-representation; that is, being ‘too narrow and [failing] to include important dimensions or facets of focal constructs’ (Messick, 1996, p. 244).

Surely we cannot claim that the understanding of assessing and learning 21C skills is any less complex than 21C skills themselves? Of my initial findings, I wrote that:

We may be able to draw the conclusion that assessing digital literacies is a 21st century literacy twice over, in that both digital literacies and the assessment thereof are new practices that share similar if not identical constituents.

Telling me that the platform can't do it differently is an unsatisfactory answer that frankly underlines the un-21C approach taken by this course. 21C educators don't allow themselves to be locked in by platforms. The course designers seem to have missed a great opportunity to model 21C assessment for us. I'm not saying it would be easy, mind you. But is it really possible that the same minds who developed an online test of CPS can't create something better than the very average xMOOC?

Okay, I should stop here before this becomes an awful rant that makes me the worst student I never had. I am learning, really, even if sometimes the learning isn't what's in the LOs. And I will continue to persevere and maybe even to post my contrary posts despite the threat of being downvoted by annoyed classmates :P

Feedback

5 min read

It's Tuesday as I write this, and as I happen to be doing a workshop on feedback tomorrow, I thought I'd be lazy and share some of the key content as my Wednesday post on assessment. I've organised my session around the three categories of Why? - How? - What? (inspired by Shove, Pantzar and Watson's social practice theory (SPT) framework), before we give it a try as a class. The aim is to give effective feedback as efficiently as possible; as we all know, it's tiring and time-consuming work, and sometimes it feels like our efforts just disappear into a black hole!

 

Why feedback?

Feedback is integral to formative assessment, which, as we already know from Black & Wiliam, can result in significant learning gains, helps low achievers in particular, and can cultivate active and collaborative learners. It therefore supports self-directed learning and 21st century competencies.

 

How can we give effective feedback?

Here's a great image based on this article.

5 research-based tips for providing students with meaningful feedback

This work by rebe_zuniga is licensed under a Creative Commons Attribution 2.0 Generic Licence.

More tips I've gathered from various articles (including some tweeted by Dr Carless):

  1. Build trust: make learners feel safe to fail, so that they take risks, and allow us to see what help and feedback is needed
  2. Promote a growth mindset: as per Carol Dweck -- as Dylan Wiliam says 'smart is not something you are, smart is something you get'
  3. Develop a dialogue: instead of writing mini-essays learners might never read in earnest, engage our learners in a dialogue
  4. Forget the sandwich: the feedback sandwich can seem condescending or manipulative; be honest and constructive instead
  5. Focus on task, not ego: we don't need the sandwich to protect the learner's fragile ego if we focus on the task rather than the person
  6. Eliminate grades/marks: or delay releasing them if we can't -- research shows learners tend to ignore feedback if both are given
  7. Assess one criterion per task: we risk overwhelming the learner if we try to assess everything at once -- focus on one thing at a time, and let the learner know in advance so that they know where to direct their efforts
  8. Feed it forward: what next? how can the learner apply this feedback in future work?
  9. Make it actionable: can it be applied? or is it beyond the ability of the learner?
  10. Work less than the learner: resist correcting everything for the learner -- we want to encourage them to take responsibility and ownership, and to develop self-directed learning capabilities
  11. Cultivate feedback literacy: why is feedback important, and how do we use feedback to improve what we do?
  12. Activate peers: peer feedback can be more effective than ours, and learners learn twice when they give feedback, helping them internalise the qualities of a good performance and self-assess
  13. Share range of feedback: learners improve their awareness when they see what others have done well or poorly
  14. Incorporate regular reflection: reflection helps learners develop themselves as self-assessors and self-directed learners, and helps us understand better the kind of feedback our learners are in need of

 

What can we use?

I've thought of 10 tools but maybe you have more to suggest.

  1. Analytic rubrics/scoring: this is usually in the form of a grid, and breaks performance down into criteria
  2. Marking symbols: commonly used in assessing writing (e.g. SP = spelling error) 
  3. Master list of comments: keeping a list of frequent comments that we can 'recycle' by copying and pasting; this can include links to resources such as YouTube content
  4. Google Drive: the Swiss Army knife of digital feedback tools; easily build a feedback dialogue -- check out Doctopus which turbocharges what is already a powerful tool
  5. Voice recordings: can result in better uptake; easy on Google Docs with Kaizena (not so easy on Word)
  6. Google Forms: great for eyeballing answers collated onto a spreadsheet and sending quick individual comments as feedback; allows learners to see the range of answers and feedback
  7. Spreadsheets: as part of a Google Form or by themselves; help us be consistent with both feedback and comments; easily mail merge feedback to learners (see the sketch after this list)
  8. Screenshot annotations: sometimes we need to show, not tell; I really like Awesome Screenshot because it plays well with Google Drive
  9. Screencasting: sometimes we need to show and tell; Screencastify is one of many options out there (free and works with Chromebooks)
  10. YouTube: with a webcam, we can easily video ourselves giving feedback and upload it immediately as a public or private video for sharing
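On tool 7, here's a rough sketch of what mail-merging feedback out of a spreadsheet export might look like. This is purely illustrative: the file name and column headers are my own invention, and in practice an add-on can do this inside Google Sheets, but the logic is the same:

```python
import csv
from string import Template

# Hypothetical feedback.csv exported from the marking spreadsheet, with
# columns: name, email, criterion, comment, next_step
template = Template(
    "Hi $name,\n\n"
    "On '$criterion': $comment\n\n"
    "Suggested next step: $next_step\n"
)

with open("feedback.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        message = template.substitute(row)
        # Hand off to your mail tool of choice here; printed for brevity.
        print(f"To: {row['email']}\n{message}")
```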

 

I can't profess to be a model 'feedbacker', but I do take feedback on my feedback seriously and reflect on my own practices (even as I write this). Have you got other tips or strategies to share? What has worked and not worked for you?

 

Formative assessment

3 min read

What is assessment? While we often use “test” and “assessment” interchangeably, it’s important to differentiate the two. A test is an assessment, but an assessment isn’t necessarily a test. Tests are usually timed and result in marks or grades. Assessments can take many other forms, however.

Hill and McNamara (2012) talk about assessment opportunities, which they define as ‘any actions, interactions or artifacts... which have the potential to provide information on the qualities of a learner’s... performance’. It’s important to note that these can be unplanned, unconscious and embedded, and therefore can take place anytime in class, and these days, out of class as well.

Assessment opportunities are particularly useful for formative assessment. Black and Wiliam, who have written extensively on this topic, say that assessment is formative only if the evidence about student achievement obtained is actually used to make decisions about the next steps in instruction.
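To illustrate what 'actually used' might mean in practice, here's a toy sketch of such a feedback loop, where exit-ticket results drive the next teaching decision instead of just being recorded. The thresholds are invented for illustration:

```python
# Toy feedback loop: exit-ticket scores (0 to 1) decide the next step.
# Thresholds are invented; the point is that evidence changes the plan.
def next_step(scores, mastery=0.8):
    rate = sum(s >= mastery for s in scores) / len(scores)
    if rate >= 0.8:
        return "move on; offer a stretch task to early finishers"
    if rate >= 0.5:
        return "reteach in small groups with peer support"
    return "reteach the whole class using a different approach"

print(next_step([0.9, 0.7, 0.85, 0.6, 0.95]))  # -> small-group reteach
```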



Formative assessment is often known as Assessment for Learning. The Assessment Reform Group came up with this diagram (above) to illustrate the importance of formative assessment. I think it shows the different dimensions of formative assessment very well. I particularly like the point about developing the capacity for self-assessment, which is critical to the development of self-directed learners. In their definition of AfL, the 3 aims are to find out where the learners are, where they need to go, and how best to get there.



Wiliam usefully unpacks formative assessment in the chart above, which shows us the respective roles of teacher, peer and learner in achieving the 3 aims I’ve just mentioned. As you can see, formative assessment, done right, ought to cultivate active and collaborative learners.

So what’s the difference between formative assessment and its opposite, summative assessment? In a nutshell, they have different functions and result in different things. Summative assessment is used to rank or certify, and for accountability purposes, while formative assessment is actually used to meet learner needs. Summative assessment typically ends with grades or marks, while formative assessment produces feedback for the learner instead.

Black and Wiliam have noted that when students are given both, they tend to ignore feedback and focus solely on their grades or marks. This is a habit that’s hard to break, and makes marks and grades doubly un-useful for learners.

What are some other reasons formative assessment is important? Black and Wiliam have reported significant learning gains as a result, noting that it helps low achievers in particular.

So often, however, teachers think of formative assessments as little tests that result in marks or grades, which tell neither teachers nor students much about the learning that's going on, or what to do next.

Formative assessment can be embedded into our class activities. Take a look at this page by the Northwest Evaluation Association for some ideas.

What formative assessment activities do you use? How do you and your students use them to inform teaching and learning? Please share with us on Twitter!