Assessing literacies

3 min read

This week I'm writing about an assessment topic that's closer to my heart. Because my assessment class is doing the assessment of multiliteracies this week, I thought it was the right time to write a little overview.

First, what are multiliteracies? According to the New London Group (1996), we are ‘designers of meaning’. They identified 6 design elements in the meaning-making process:

  1. Linguistic meaning
  2. Visual meaning
  3. Audio meaning
  4. Gestural meaning
  5. Spatial meaning
  6. Multimodal meaning

In my own research, however, I am working more with related concepts:

New literacies (e.g. Lankshear & Knobel, 2006): new literacy practices, particularly those associated with ICT

New Literacy Studies (e.g. Street, 1984): a new, sociocultural perspective of literacies as socially situated practices rather than skills

21st Century skills/competencies/etc.: various models (see ‘Defining 21st C Skills’)

Digital literacies: many definitions, including 

  1. Attention literacy
  2. Crap detection (information literacy)
  3. Participation literacy
  4. Collaboration literacy
  5. Network smarts (building social capital and PLN)

Yes, it's confusing (I'm still confused) and yes, there is considerable overlap in all this.

So how are these literacies (broadly defined) assessed? There are transnational efforts, and there are also researchers like Kimber and Wyatt-Smith who focus mostly on multimodal literacy.

Kalantzis and Cope (2011, pp. 81-82) name 6 core principles for assessing writing, and I think they are useful when thinking about assessing multiliteracies in general. They wrote that meaningful assessment should

  1. be situated in a knowledge-making practice
  2. draw explicitly on social cognition
  3. measure metacognition
  4. address multimodal texts
  5. be “for learning”, not just “of learning”
  6. be ubiquitous.

To me, the more promising holistic approaches include collaborative project work, ePortfolios and Learning Analytics (experimental). Too much of what I read is still preoccupied with tests; I think with tests you always end up with considerable construct underrepresentation. You can't possibly fully capture literacies with a test, even if it's multimodal and digital. Authenticity is critical, no matter how messy.

Promising assessment design approaches include indigenous assessment (assessment that relies on indigenous tasks and criteria; different social groups have their own indigenous assessments for newcomers wishing to gain acceptance), social/peer assessment and self-assessment. A purely teacher- or examiner-led assessment isn't going to cut it.

Of course, one could argue that these alternative assessments are just as valid for assessing school-bound literacies. But while we're moving beyond traditional assessments, why not also redefine literacies and learning for school? There is no time like the present.

ATC21s week 4: Teaching collaborative problem solving

8 min read

This week is the final week of video lectures, so I'll end my reflective posts with today's too (I am not doing the assignment).

Video 4.0 discusses the differences between collaboration and cooperation. Collaboration is also not group work (though I don't see why group work can't sometimes end up being collaborative in their sense of the word). Collaboration is actually defined very specifically here, to the extent that I can see why it might be hard to find authentic problems that fit this model perfectly for the classroom.

4.1 discusses inter- and intra-individual differences among learners. Individuals develop different aspects of the CPS construct at different rates. As an example of how the teacher might cater to such a diverse group, a tertiary-level CPS task is described. I'm not very clear about the objective of this task so I won't go into it here. It's highlighted that the same task can be scaffolded differently for different students depending on their cognitive and social development. But what if, within the group, members are at different stages of development? How can we scaffold the task differently for each person if they are all working together on the same task? In 4.3 the researcher interviewed mentions teachers getting students to read up on different topics so that they each bring something different to the problem solving. I suppose something like this could be done, though it smacks of too much teacher engineering -- is this still collaboration and not cooperation?

4.2 is an interview with Griffin and Care. Care explains that the kind of CPS tasks used in their study only tap into the 'tip of the iceberg' as far as the CPS construct is concerned. This indicates that they do indeed believe that such puzzles (as I call them) measure the same construct as more complex (more authentic?) tasks. I remain rather dubious about this. Reading comprehension is mentioned here for comparison:

whether it's multiple choice or some open item questions and what we get from that assessment is some indication of the skill

But of course MCQ and open-ended items are likely to be differently valid, and this isn't trivial -- one may tap into more of the iceberg-construct than the other. Or tap into a different iceberg! My point is that construct validity cannot be assumed. It could be that ATC21s has research showing their CPS and real-life CPS are the same construct of course, and I've missed an article somewhere.

Griffin then says:

The issue that you spoke about with the reading test, we've managed over a hundred years or so, to become very good at that kind of assessment, and in interpreting that. And we know now that that one piece of text and the two or three questions that are associated with it, are only a sample of what we could do. So, we build more and more complicated texts. Yeah. And more difficult questions and we address higher and higher skills in the test. So over a 40 item test of a reading comprehension we go from very simple match the word to this picture through to judging an author's intention. Yeah. And so behind the multiple choice questions on a piece of paper there's also a lot of complex thinking that goes on and there's, behind that, there's the idea of a developmental progression, or a construct that we're mapping, but the teacher, the student never sees that until after the test has been developed, interpreted and reported.

Well now. This is such an oversimplification of what we know about assessing reading that it's at best a poorly chosen analogy.

It's then claimed that while the learners are enjoying their games, the researchers are actually assessing their CPS development. Putting aside the fact that teachers likely won't have access to such games (as I've already pointed out previously), I'm not sure about the suggestion here that this is a good example of assessing through games. It's been a while since my gaming days, but I would never consider these good games. I really think you need good games for game-based learning, and the same goes for assessment.

a reading test is often read a passage, look at a question, choose an alternative out of four possible alternatives by pressing a button or ticking a box. What we have done enables a much more complex view of that. We can now tap into what's going on in the background behind the student's reading comprehension, what they're thinking while they're trying to work out what alternative they choose.

Hmm. I think they should just stop referring to reading assessment unless they've really got some novel reading assessment along the same lines as their CPS 'games' that somehow has never been disclosed to language testers.

And finally we come to what I see as the crux of it all:

You know, one of the challenges for us still is that, we don't know yet whether the skills that we're picking up will generalize to real life situations. That's one of the big issues. And in part we're, we're hampered and we're constrained, because of the nature of how you pick up the assessment data. You know, because, if we're talking about the sorts of problems to which we want to bring collaborative problem solving, like big problems, or say, global warming, The issue is that in the school context what you typically give to student is well defined problems. Problems that they've given a lot more scaffolding to work through, they're given structure to work through. If we go too far down that path, too much structure and too much scaffolding, they won't learn the particular underlying skills that we need that they can then generalize to take to the big problems. So there's some real issues in our assessments.

I don't know if I understand this correctly. Are they saying that their CPS tasks are well-defined because this is a constraint of school? And that this can be compensated for by providing less structure and scaffolding? IMO schools can definitely do CPS differently if they want to. And they can't do it ATC21s style anyway, with their kind of electronic games. It seems to me that the well-definedness of their games is a constraint of their research design, not of schools -- hence that part about being constrained by 'the nature of how you pick up the assessment data'. But assessment data can be 'picked up' in different ways.

4.3 is an interview with a researcher who is working with teachers on implementing CPS in their schools. This is an interesting account that I think teachers on the course would want to know more about. At the start, the researcher says that the teachers had their students do the online CPS tasks so that they had a baseline to work with. Could all schools do this? What if they couldn't? In 4.4 we hear from the 2 schools involved in the study, and again, while interesting, it would be even more interesting to hear from schools that implement this without the technological support of the research team.

4.4 is a recap, with some future directions. It's pointed out that teachers have to be effective at collaborating themselves if they want to teach it to their students, and I wholeheartedly agree. That said, if they are taught this in pre-service in the same tip-of-the-iceberg way, I'm not sure if they would be prepared for real-world collaboration in the school.

There are also some extra videos available, I think just this week, under resources. One of them is called 'Learning in digital networks', and it suggests that this sort of CPS task gives learners a start to their ICT literacy or learning in digital networks. I really don't know about this. Given the rich digitally networked environment kids live in (at least in developed countries), do they really need to start with something like this? Do we have to train them on a toy 'internet' before they know how to learn on the real one? Chances are many already are learning and collaborating on the real internet.

This highlights the difference between this course's orientation to 21st century competencies and mine. ATC21s takes a more cognitive, more skills-based, more measurement-centric approach that, while contributing a great deal to our understanding of such competencies, may also be limited in its usefulness for transforming learning in the classroom. I like that the ATC21s team are clearly more interested in learning and development, but I think a more social practice approach (to competencies, to assessment) is better aligned with formative aims and better able to achieve them. This is probably my research bias talking, so I'll stop here.

I hope you've enjoyed my reflective posts on the ATC21s MOOC! Next week, something new.

ATC21s week 3: Assessing Collaborative Problem Solving skills

3 min read

I'll start with a note that I've been wondering how exactly the developmental progression scales are novel. Today it dawned on me that they are indeed similar, if not identical, to the kind of proficiency scales so common in language testing. I guess it had never occurred to me that there are any other ways to measure development and progress in learning. Anyway, for this reason, I wish there had been more of an emphasis on criterion referencing, since the use of such a scale can easily 'devolve' into norm referencing.

This week we learn about using the progression scales to observe students in CPS and find their ZPD -- like using a systematic observation scheme. We can use different colour highlighters with the printed observational framework to mark where the student is at each observation, to see if the student has progressed.

My thoughts at this point are similar to last week's. The previous two weeks now really do seem like an overly long preamble to this practical lesson. Was there a need to explain in so much detail how the scale was developed? And would scales developed using such computer-based puzzles work for other (maybe more authentic) kinds of CPS?

We are shown a 'case study' called Laughing Clowns, which is just the same sort of computer-based puzzle. We are supposed to watch the video of the task being performed and practise using the observational framework. We are told what skill examples to watch for. Okay, given that teachers won't have this sort of puzzle on this sort of software for their students to work on to develop CPS (and should they use them anyway?), I wonder how useful this practice is. I've done unstructured observations of students working collaboratively, and it's nothing so neat and clean as the polite text-based turn-taking that this learner dyad demonstrated in the video. This difficulty is acknowledged in the video, but clearly these examples are meant to help us 'start easy'. (Whether we can progress in our own developmental progression scale as observers would be up to us, I suppose.)

The second practice case study is a more difficult puzzle called Olive Oil. The third is called Balance, and the last is called Game of 20 (a mathematical game). Again, they are puzzles rather than problems per se -- not very realistic or interesting to me. It's ironic that the first hands-on week should be the most boring to me (but as I tweeted before, I am probably not the target audience).

Writing my response to this week's prompt made me think of heutagogy again. While the CPS progression scale is undoubtedly useful (models are always useful even if not accurate), assessing CPS this way (give students a puzzle, observe them solve it, mark progression on a scale) is not very heutagogical or 21st century, IMO. How far away is this from the old-hat performance assessment that's pretty common in language assessment? How much of this is 'old wine in new bottles'? Where is the peer and self assessment, space for out-of-class 'informal' learning, authenticity and real-world relevance? I'm not saying that this model leaves no room for all that. But I think they should be central to assessment in the digital age, not nice-to-have add-ons. Backwash is a result not just of what we assess, but how we assess, too.

ATC21s week 2: A closer look at 21st century skills: collaborative problem solving

7 min read

This week I'm somewhat distracted by an upcoming trip to Bangkok to present at the 2nd Annual Asian Association for Language Assessment Conference. This is the first time I am formally presenting on my study, so I'm quite nervous! Fortunately I was able to squeeze in some time for week 2 of ATC21s.

Here's a quick summary of this week's lesson:

1. What is collaborative problem solving (CPS)? There are existing problem solving models (cited are Polya, 1973, and PISA, 2003/2012), but they do not include the collaborative component. Therefore ATC21S has come up with their own:

  • Collect and share information about the collaborator and the task
  • Check links and relationships, organise and categorize information
  • Rule use: set up procedures and strategies to solve the problem using an “If, then..” process
  • Test hypotheses using a “what if” process and check process and solutions

The CPS construct is made up of social skills and cognitive skills.

2. Social skills are participation, perspective taking and social regulation skills. These can be further unpacked:

  • Participation: action, interaction and task completion
  • Perspective taking: responsiveness and audience awareness
  • Social regulation: Metamemory (own knowledge, strengths and weaknesses), transactive memory (those of partners), negotiation and responsibility initiative

There are behavioural indicators associated with each of these elements. (At this point, I was pretty sure that Care and Griffin don't mean to suggest that teachers conduct Rasch analysis themselves, but rather use already developed developmental progressions.)

3. Cognitive skills are task regulation, and knowledge building and learning skills:

  • Task regulation: problem analysis, goal-setting, resource management, flexibility and ambiguity management skills, information collection, and systematicity
  • Knowledge building and learning: relationships, contingencies and hypotheses

Again, each element has associated indicators.

4. We come back to the developmental approach that integrates the work of Rasch, Glaser and Vygotsky. Teachers need a framework that they can use to judge where their students are in their CPS development. There are existing ones (such as the ubiquitous Bloom's), but none are suited to measuring CPS skills. So what we need is a new empirically derived framework that allows teachers to observe students in CPS action and judge where they are.

5. Empirical progressions are explained, and examples such as PISA and TIMSS are given. We are then presented with the progression that ATC21S has developed for CPS. The table is too large to reproduce here, but essentially it expands all the elements in 2 and 3 into progressions, so that you end up with five scales.

 

Impressive, right? Except I'm not quite sure about the tasks they used to develop this. The example they showed was of two students connected over the internet and chatting by typing, attempting to solve what appears to be more of a puzzle than a problem. That is, the sort of problem teachers cook up to test students' intellectual ability (shades of ?). The 2nd volume of the book series actually has a chapter that discusses this in more detail and seems to confirm that they used puzzles of this sort. I understand, of course, that doing it this way makes it easier to collect the sort of data they wanted. But given that the tasks aren't very authentic, to what extent are they representative of the target domain? Are there issues of construct validity? I will need to read further, if there is available literature, before I make up my mind. It would be interesting, if not already done, to conduct a qualitative study using more authentic problems, more students per team, observation, artefact collection, (retrospective) interviews, and so on. You wouldn't get the quantity of data their study did, but this sort of rich data could help us check the validity of the framework. It could also be of more practical value to teachers who actually have to teach and assess this without fancy software and a team of assistants.

I won't deny that I'm rather disappointed that Rasch measurement is really 'behind the scenes' here, though I'm not surprised. I can't help but wonder if it's really necessary to make Rasch appear so central in this course, especially since some of my classmates seem to misunderstand its nature. This is not surprising -- Rasch is not the sort of thing you can 'touch and go' with. There is some confusion about criterion referencing too (IMO it's hard to make sense of it without comparing it to norm referencing and explaining how the two are usually used in assessment). ZPD is faring a little better, probably since it's familiar to most teachers. I am, however, surprised to see it occasionally referred to rather off-handedly, as if it's something that's easy to identify.

Would it make more sense to focus more on the practicalities of using an established developmental progression? It's too early to say I guess, but already quite a few of my classmates are questioning the practicality of monitoring the progress of large classes. This is where everyday ICT-enabled assessment strategies can come into play. I also hope to see more on how to make assessments really formative. I learnt from the quiz this week (if it was mentioned elsewhere I must have missed it) that assessments that are designed to measure developmental progression are meant to be both formative and summative. Okay, great, but IMO it's all too easy to miss the formative part completely without even realising it -- remember that an assessment is only formative if there's a feedback loop. The distinction between the two uses cannot be taken lightly, and there really is no point harping on development and ZPD and learning if we ignore how assessment actually works to make progress happen.

Which brings me to the assessment on this course. If you're happy with the quizzes so far you might want to stop reading here.

 

Diligent classmates may have noticed from my posts that I REALLY do not like the quizzes. Initially it was the first so-called self-assessment that I took issue with. Briefly, its design made it unfit for purpose, at least as far as I'm concerned. After doing another 'self-assessment' for week 2 and the actual week 2 quiz, I'm ever more convinced that the basic MCQ model is terrible for assessing something so complex. It's quite ironic that a course on teaching and assessing 21C skills should utilise assessments that are assuredly not 21C. Putting what could be a paper MCQ quiz online is classic 'old wine in new bottles', and we really cannot assess 21C skills with 19C or 20C methods. I have written (to explain my own study) that:

... digital literacies cannot be adequately assessed if the assessment does not reflect the nature of learning in the digital age. An assessment that fails to fully capture the complexity of a construct runs the risk of construct under-representation; that is, being ‘too narrow and [failing] to include important dimensions or facets of focal constructs’ (Messick, 1996, p. 244).

Surely we cannot claim that the understanding of assessing and learning 21C skills is any less complex than 21C skills themselves? Of my initial findings, I wrote that:

We may be able to draw the conclusion that assessing digital literacies are 21st century literacies twice over, in that both digital literacies and the assessment thereof are new practices that share similar if not identical constituents.

Telling me that the platform can't do it differently is an unsatisfactory answer that frankly underlines the un-21C approach taken by this course. 21C educators don't allow themselves to be locked in by platforms. It seems that the course designers have missed out on a great opportunity to model 21C assessment for us. I'm not saying that it would be easy, mind you. But is it really possible that the same minds who developed an online test of CPS can't create something better than this very average xMOOC?

Okay, I should stop here before this becomes an awful rant that makes me the worst student I never had. I am learning, really, even if sometimes the learning isn't what's in the LOs. And I will continue to persevere and maybe even to post my contrary posts despite the threat of being downvoted by annoyed classmates :P

ATC21s week 1: Defining 21st Century Skills

6 min read

I've been wondering what to write next, and in the end decided to change things up a bit. I am inspired by the second run of the Assessment and Teaching of 21st Century Skills MOOC, which started yesterday. I'd strongly encourage anyone interested in this topic to join us! I actually registered for the first run last year but couldn't find the time to do any of the work. This time I'm more determined!

So for these five to six weeks I'm going to blog a weekly informal reflection on the course. It isn't a cMOOC, unfortunately, so I don't know how many people will be blogging along, but I'm going to do it anyway (and tweet too). I plan to write about my chief takeaways for the week, their implications for my own research interests, and any questions that occur to me.  

The theme for Week 1 is Defining 21st Century Skills. I am immediately engaged, since anyone who has to write about this topic struggles to define it! Here are the learning objectives, as they are called here:

  • Understand the influence of technology on the workplace, and the implications for schools
  • Understand what is meant by '21st century skills'
  • Be familiar with a range of approaches to defining 21st century skills
  • Be familiar with 21st century skills frameworks
  • Understand what is meant by a developmental approach to assessment and learning. 

(Interesting that Bloom's or similar is not a must here!)

 

We are introduced to a number of frameworks, starting from the ATC21s one, since the course is run by Esther Care and Patrick Griffin from the ATC21s team. They have developed the KSAVE (knowledge, skills, attitudes, values and ethics) model:

Ways of Thinking

  1. Creativity and innovation
  2. Critical thinking, problem solving, decision making
  3. Learning to learn, metacognition

Ways of Working

  4. Communication
  5. Collaboration (teamwork)

Tools for Working

  6. Information literacy
  7. ICT literacy

Living in the World

  8. Citizenship – local and global
  9. Life and career
  10. Personal and social responsibility – including cultural awareness and competence

 

Here are the other frameworks introduced:

UNESCO 

  • Learning to know
  • Learning to do
  • Learning to be
  • Learning to live together 

 

OECD (3 overlapping circles) 

  • Use language, symbols and texts interactively
  • Interact in heterogeneous groups
  • Act autonomously

 

P21 Partnership for 21st Century Learning  

[P21 framework diagram]

 

European Commission Recommendation on key competences for lifelong learning

  1. Communication in the mother tongue;
  2. Communication in foreign languages;
  3. Mathematical competence and basic competences in science and technology;
  4. Digital competence;
  5. Learning to learn;
  6. Social and civic competences;
  7. Sense of initiative and entrepreneurship; and
  8. Cultural awareness and expression. 

 

And even though Singapore is an ATC21s member, MOE's 21st Century Competencies framework is not mentioned. Perhaps it's assumed to be aligned with the ATC21s framework. I include it here anyway for the sake of comparison.

[MOE 21st Century Competencies (21CC) framework diagram]

 

 

Care and Griffin are clear that no framework can be 'one size fits all', and so it isn't so much a case of competing frameworks as of different contexts having different needs. That said, I feel more attracted to KSAVE for reasons I can't really articulate now. I also note that UNESCO's framework is the only one here that doesn't refer to technology in some way, even obliquely. I'm not sure why that's so. Which framework makes most sense to you?

 

The other major takeaway is ATC21s's framework for what I think is essentially formative assessment. From the initial self-assessment quiz, which was supposed to tell me how much I already know and don't know (but was really too vague to do that), I gather that this framework is the crux of the course, which they will illustrate in the coming weeks using the example of Collaborative Problem Solving. I was surprised at this point to find my familiar friends: Zone of Proximal Development (Vygotsky), criterion-referenced assessment (Glaser -- though, shamefully, I have never cited him when writing about criterion referencing) and Rasch measurement.

@sallyngsh might remember joining me for a talk by an NIE colleague on Rasch and ZPD. At that time I felt that the speaker wasn't really claiming that one could locate the ZPD using Rasch. But I think that this is precisely what Care and Griffin are claiming. Very briefly, the Rasch variable map lines up item difficulty and person ability along the same scale, and the developmental levels we infer from a criterion-referenced scale can be lined up against this as well. So at the bottom we have low difficulty, low ability and the lowest level of development/competence. At the top we have high difficulty, high ability and the highest level of development/competence. A test-taker and an item at the same level of the scale means that the test-taker has about a 50% probability of getting the item correct. The idea, if I understand it correctly, is that a teacher can look at this map and say: these students are at this level, so I need to work with them on these items and items one level up (or thereabouts), because this is their ZPD.
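Since the variable map rests on a simple formula, here's a minimal sketch of the dichotomous Rasch model in Python -- my own illustration, not anything from the course, with the ability and difficulty values invented for the example:

```python
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Dichotomous Rasch model: probability of a correct response,
    given person ability and item difficulty on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A person and an item at the same point on the map: exactly 50%.
print(p_correct(0.0, 0.0))   # 0.5
# The same person facing an easier item, one logit below their ability.
print(p_correct(0.0, -1.0))  # ~0.73
# And a harder item, one logit above their ability.
print(p_correct(0.0, 1.0))   # ~0.27
```

Reading the map then amounts to spotting the items that sit at or just above a student's estimated ability -- success possible but not yet assured -- which is how I understand the ZPD claim above.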

Which is all fine and really quite brilliant. Except that the MOOC doesn't at this point address what I think many people familiar with Rasch measurement know: it's an obscure theory in an obscure field of study (among educators anyway), and seemingly difficult to grasp, even for people with a working knowledge of assessment theory and statistics. And to be honest, I don't know if that many teachers have such a working knowledge; many are statistics-phobic, which would be a huge barrier here.

The Self-directed Learning Oriented Assessment (SLOA) project in Hong Kong has actually introduced Rasch measurement to school teachers for use in formative assessment. The teachers were trained to use the program Winsteps; while they found using it challenging, they were nevertheless able to appreciate its benefits. Unfortunately, I don't think Rasch has become more widely known or practised since. I've wondered a few times if I could possibly run introductory courses for teachers, but I'm not backed by a university-funded research programme, so this could be too ambitious, and there may be zero demand for it locally.

I know now that one important question I'd like to answer by the end of this MOOC is: how can ordinary teachers get the hang of Rasch and use this framework in their classrooms, given that the investment of time and energy to do this is considerable, and their motivation and/or confidence low? A second: if ATC21s has a solution, can I play a part to make this an emerging assessment practice among Singapore teachers?

Looking ahead to the learning objectives in the coming weeks, I rather suspect that this MOOC will not offer a solution, and it might be unrealistic to expect it to anyway. But I would surely appreciate some clues and inspiration.

 

Feedback

5 min read

It's Tuesday as I write this, and as I happen to be doing a workshop on feedback tomorrow, I thought I'd be lazy and share some of the key content as my Wednesday post on assessment. I've organised my session around the three categories of Why? - How? - What? (inspired by Shove, Pantzar and Watson's SPT (social practice theory) framework), before we give it a try as a class. The aim is to give effective feedback as efficiently as possible; as we all know, it's tiring and time-consuming work, and sometimes it feels like our efforts just disappear into a black hole!

 

Why feedback?

Feedback is integral to formative assessment, which, as we already know from Black & Wiliam, can result in significant learning gains, helps low achievers in particular, and can cultivate active and collaborative learners. It therefore supports self-directed learning and 21st century competencies.

 

How can we give effective feedback?

Here's a great image based on this article.

5 research-based tips for providing students with meaningful feedback

This work by rebe_zuniga is licensed under a Creative Commons Attribution 2.0 Generic Licence.

More tips I've gathered from various articles (including some tweeted by Dr Carless):

  1. Build trust: make learners feel safe to fail, so that they take risks, and allow us to see what help and feedback is needed
  2. Promote a growth mindset: as per Carol Dweck -- as Dylan Wiliam says 'smart is not something you are, smart is something you get'
  3. Develop a dialogue: instead of writing mini-essays learners might never read in earnest, engage our learners in a dialogue
  4. Forget the sandwich: the feedback sandwich can seem condescending or manipulative; be honest and constructive instead
  5. Focus on task, not ego: we don't need the sandwich to protect the learner's fragile ego if we focus on the task rather than the person
  6. Eliminate grades/marks: or delay releasing them if we can't -- research shows learners tend to ignore feedback if both are given
  7. Assess one criterion per task: we risk overwhelming the learner if we try to assess everything at once -- focus on one thing at a time, and let the learner know in advance so that they know where to direct their efforts
  8. Feed it forward: what next? how can the learner apply this feedback in future work?
  9. Make it actionable: can it be applied? or is it beyond the ability of the learner?
  10. Work less than the learner: resist correcting everything for the learner -- we want to encourage them to take responsibility and ownership, and to develop self-directed learning capabilities
  11. Cultivate feedback literacy: why is feedback important, and how do we use feedback to improve what we do?
  12. Activate peers: peer feedback can be more effective than ours, and learners learn twice when they give feedback, helping them internalise the qualities of a good performance and self-assess
  13. Share the range of feedback: learners improve their awareness when they see what others have done well or poorly
  14. Incorporate regular reflection: reflection helps learners develop as self-assessors and self-directed learners, and helps us better understand the kind of feedback our learners need

 

What can we use?

I've thought of 10 tools but maybe you have more to suggest.

  1. Analytic rubrics/scoring: this is usually in the form of a grid, and breaks performance down into criteria
  2. Marking symbols: commonly used in assessing writing (e.g. SP = spelling error) 
  3. Master list of comments: keeping a list of frequent comments that we can 'recycle' by copying and pasting; this can include links to resources such as YouTube content
  4. Google Drive: the Swiss Army knife of digital feedback tools; easily build a feedback dialogue -- check out Doctopus, which turbocharges what is already a powerful tool
  5. Voice recordings: can result in better uptake; easy on Google Docs with Kaizena (not so easy on Word)
  6. Google Forms: great for eyeballing answers collated onto a spreadsheet and adding quick individual comments as feedback; allows learners to see the range of answers and feedback
  7. Spreadsheets: as part of a Google Form or on their own; help us be consistent with both feedback and comments; easily mail merge feedback to learners (see the sketch after this list)
  8. Screenshot annotations: sometimes we need to show, not tell; I really like Awesome Screenshot because it plays well with Google Drive
  9. Screencasting: sometimes we need to show and tell; Screencastify is one of many options out there (free and works with Chromebooks)
  10. YouTube: with a webcam, we can easily video ourselves giving feedback and upload it immediately as a public or private video for sharing
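
Since I mention mail merging above, here's a minimal sketch of what that could look like -- my own illustration, not a feature of any of the tools listed. It assumes a hypothetical feedback.csv exported from your spreadsheet with name, email and feedback columns, and an SMTP server you're allowed to send through:

```python
import csv
import smtplib
from email.message import EmailMessage

SMTP_HOST = "smtp.example.com"   # hypothetical server; use your school's
SENDER = "teacher@example.com"   # hypothetical sender address

def send_feedback(csv_path: str) -> None:
    """Email each learner the feedback recorded in their spreadsheet row."""
    with open(csv_path, newline="", encoding="utf-8") as f, \
         smtplib.SMTP(SMTP_HOST) as smtp:
        for row in csv.DictReader(f):   # expects name, email, feedback columns
            msg = EmailMessage()
            msg["From"] = SENDER
            msg["To"] = row["email"]
            msg["Subject"] = "Feedback on your latest task"
            msg.set_content(f"Dear {row['name']},\n\n{row['feedback']}\n")
            smtp.send_message(msg)

if __name__ == "__main__":
    send_feedback("feedback.csv")
```

The same idea works through a spreadsheet add-on or script; the point is that once feedback lives in a structured sheet, sending it out individually is cheap.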

 

I can't profess to be a model of a good 'feedbacker', but I do take feedback on my feedback seriously and reflect on my own practices (even as I write this). Have you got other tips or strategies to share? What has worked and not worked for you?

 

Alphabet soup: AfL, AaL, LOA

2 min read

Last week, my post on formative assessment (and a subsequent tweet asking for suggestions) sparked a short conversation on Twitter with @ashley about Assessment for Learning and Assessment as Learning, as well as Learning Oriented Assessment. I'm still looking for suggestions for this blog (let me know!); in the meantime, here's my attempt at sorting out these concepts.

Assessment for Learning (AfL) is for all intents and purposes formative assessment. It's useful here to revisit Dylan Wiliam @dylanwiliam's table:


Assessment as Learning was originally proposed by Lorna Earl @lmearl. While often differentiated from AfL, if we accept Wiliam's definition of AfL, AaL is more accurately a subset of AfL:


Learning Oriented Assessment is the 'new' kid on the assessment block:

Figure from Carless (2007)

Originally proposed by David Carless @carlessdavid and his colleagues, the concept should ring a bell for those of you who are familiar with the backward design approach to curriculum. This approach includes Understanding by Design (Wiggins @grantwiggins & McTighe @jaymctighe), popular in K-12:


(taken from here; original source unknown)

And also Biggs's Constructive Alignment (well-known in HE):


Diagram by UCD Teaching & Learning

I see LOA as a model that not only employs backward design, but does it in a way that foregrounds formative assessment (including AaL). It also deemphasises the distinction between summative and formative assessment in a way that might actually be constructive -- the key is to make summative assessment perform a learning-oriented service, in addition to institutional purposes. I say constructive because seeing the two assessments as a dichotomy (mutually exclusive) could put teachers and learners in a bind -- we can't do away with summative assessments because of institutional demands, and positioning them as the 'bad guys' doesn't necessarily eliminate washback. IMO, the distinction between formative and summative is still important, but the gap can be narrowed, and an assessment could be thoughtfully designed to serve both purposes, perhaps especially if it is an 'alternative' assessment rather than a traditional timed test. By aligning all assessments with the LOs, we can ideally ensure that both kinds -- summative and formative -- are pulling stakeholders in the same direction rather than opposing ones, and promote positive washback.

I've really only just started thinking about these concepts (and what they mean in relation to my own research), so any thoughts you might have on this are very welcome :)

Formative assessment

3 min read

What is assessment? While we often use “test” and “assessment” interchangeably, it’s important to differentiate the two. A test is an assessment, but an assessment isn’t necessarily a test. Tests are usually timed and result in marks or grades. Assessments can take many other forms, however.

Hill and McNamara (2012) talk about assessment opportunities, which they define as ‘any actions, interactions or artifacts... which have the potential to provide information on the qualities of a learner’s... performance’. It’s important to note that these can be unplanned, unconscious and embedded, and therefore can take place anytime in class, and these days, out of class as well.

Assessment opportunities are particularly useful for formative assessment. Black and Wiliam, who have written extensively on this topic, say that assessment is formative only if the evidence about student achievement obtained is actually used to make decisions about the next steps in instruction.



Formative assessment is often known as Assessment for Learning. The Assessment Reform Group came up with this diagram (above) to illustrate the importance of formative assessment. I think it shows the different dimensions of formative assessment very well. I particularly like the point about developing the capacity for self-assessment, which is critical to the development of self-directed learners. In their definition of AfL, the 3 aims are to find out where the learners are, where they need to go, and how best to get there.



Wiliam usefully unpacks formative assessment in the chart above, which shows us the respective roles of teacher, peer and learner in achieving the 3 aims I’ve just mentioned. As you can see, formative assessment, done right, ought to cultivate active and collaborative learners.

So what's the difference between formative assessment and its counterpart, summative assessment? In a nutshell, they have different functions and result in different things. Summative assessment is used to rank or certify, and for accountability purposes, while formative assessment is actually used to meet learner needs. Summative assessment typically ends with grades or marks, while formative assessment produces feedback for the learner instead.

Black and Wiliam have noted that when students are given both, they tend to ignore feedback and focus solely on their grades or marks. This is a habit that’s hard to break, and makes marks and grades doubly un-useful for learners.

What are some other reasons formative assessment is important? Black and Wiliam have reported significant learning gains as a result, noting that it helps low achievers in particular.

So often, however, teachers think of formative assessments as little tests that result in marks or grades, which don't tell teachers or students much about the learning that's going on, or what to do next.

Formative assessment can be embedded into our class activities. Take a look at this page by the Northwest Evaluation Association for some ideas.

What formative assessment activities do you use? How do you and your students use them to inform teaching and learning? Please share with us on Twitter

Designing tests

1 min read

I'm cheating a bit this week by posting a set of slides adapted from the one I used for my class. (This cycle will be a bit different if designing alternative and/or formative assessments.)

Washback

2 min read

Even if you are not familiar with the term, you are probably familiar with the concept of washback (commonly called backwash in educational assessment). It refers to the effects of assessment on teaching and learning, and anyone who's studied in an exam-oriented system would have experienced this.

We tend to think poorly of washback because we often think of negative washback, e.g. ignoring what's in the syllabus in favour of what will be in the exam, even if we think that the syllabus has more worthy learning outcomes. While washback can be very problematic, I think we do need to consider two things.

First, as long as high-stakes exams determine a person's educational prospects, it's pretty unfair to blame teachers (and parents and learners) for their preoccupation with preparing students for exams. I don't mean to say that teachers etc. should willingly let exams lead them by the nose, and I applaud those who can look beyond exams to think and act with true education in mind. However, we would be doing our students a disservice if we didn't prepare them adequately for exams (think face validity and student-related reliability). The point is not to obsess over them and let them overrun the curriculum.

Second, washback can be positive, and we should try to leverage this. While national exams are not within our control (though we may be able to exert some subtle influence), classroom assessments are -- make sure these are aligned with our intended learning outcomes. I believe that real learning will serve students well in their exams, and that obsessive exam prepping is unnecessary.

How do you deal with washback? Let us know on Twitter with