
Educator, applied linguist, language tester.

hsiaoyun.net/

groups.diigo.com/group/assessment-literacy

Assessing literacies

3 min read

This week I'm writing about an assessment topic that's closer to my heart. Because my assessment class is doing the assessment of multiliteracies this week, I thought it was the right time to write a little overview.

First, what are multiliteracies? According to the New London Group (1996), we are ‘designers of meaning’. They identified six design elements in the meaning-making process:

  1. Linguistic meaning
  2. Visual meaning
  3. Audio meaning
  4. Gestural meaning
  5. Spatial meaning
  6. Multimodal meaning

In my own research, however, I am working more with related concepts:

New literacies (e.g. Lankshear & Knobel, 2006): new literacy practices, particularly those associated with ICT

New Literacy Studies (e.g. Street, 1984): a new, sociocultural perspective of literacies as socially situated practices rather than skills

21st Century skills/competencies/etc.: various models (see ‘Defining 21st C Skills’)

Digital literacies: many definitions, including 

  1. Attention literacy
  2. Crap detection (information literacy)
  3. Participation literacy
  4. Collaboration literacy
  5. Network smarts (building social capital and PLN)

Yes, it's confusing (I'm still confused), and yes, there is considerable overlap among all of these.

So how are these literacies (broadly defined) assessed? There are transnational efforts, such as ATC21S (more on that in later posts).

There are also researchers like Kimber and Wyatt-Smith who focus mostly on multimodal literacy.

Kalantzis and Cope (2011, pp. 81-82) name six core principles for assessing writing, and I think they are useful when thinking about assessing multiliteracies in general. They wrote that meaningful assessment should

  1. be situated in a knowledge-making practice
  2. draw explicitly on social cognition
  3. measure metacognition
  4. address multimodal texts
  5. be “for learning”, not just “of learning”
  6. be ubiquitous.

To me, the more promising holistic approaches include collaborative project work, ePortfolios and Learning Analytics (experimental). Too much of what I read is still preoccupied with tests; I think with tests you always end up with considerable construct underrepresentation. You can't possibly fully capture literacies with a test, even if it's multimodal and digital. Authenticity is critical, no matter how messy.

Promising assessment design approaches include indigenous assessment (assessment that relies on indigenous tasks and criteria; different social groups have their own indigenous assessments for newcomers wishing to gain acceptance), social/peer assessment and self-assessment. A purely teacher- or examiner-led assessment isn't going to cut it.

Of course, one could argue that these alternative assessments are just as valid for assessing school-bound literacies. But while we're moving beyond traditional assessments, why not also redefine literacies and learning for school? There is no time like the present.

#rhizo15 week 3: The 'myth' of content

3 min read

I don't feel nearly as inspired by this week's topic as I did by last week's but I'll try anyway!

First, a warning that I don't have any interesting anti-content or un-content things to say. I saw this question as being about prescription, about top-down curriculum development. There's been a lot of talk about negotiated curricula, which would be a very learner-centred thing to do but requires a lot of the teacher -- breadth if not depth of knowledge, a well-developed capacity for self-directed learning, adaptability, willingness to admit that she doesn't know everything, identification as a learner herself -- the 21st century teacher?

So I'm circling back to heutagogy, which is characterised by 

  • recognition of the emergent nature of learning and hence the need for a ‘living’ curriculum that is flexible and open to change as the learner learns;
  • the involvement of the learner in this ‘living’ curriculum as the key driver.

The elements of a heutagogical approach are:

  • Learner-defined learning contracts
  • Flexible curriculum
  • Learner-directed questions
  • Flexible and negotiated assessment
  • Reflective practice
  • Collaborative learning  

Can we learn together but with each of us following our own curriculum? Could we design our own assessments? What assessment literacy/literacies are needed? What demands does supporting all this place on the teacher?

An emerging theme in my own research is the completely obvious observation that the assessment of digital literacies requires digitally literate teachers. Similarly obvious: heutagogical learning requires heutagogical teachers (or facilitators if one prefers). And it's probably my own preoccupation talking, but it looks like assessment literacy is a big (missing) piece of the puzzle here. Can we expect learners to design their own assessments if they lack assessment literacy? Can we expect teachers to guide learners if they lack the same?

I've mentioned that I find ATC21S's conceptualisation of collaborative problem solving problematic. This CPS construct is the basis of their CPS developmental progression scale, which teachers can use to observe their students in the process of CPS, in order to assess their CPS development. I'll say more about this in my post tomorrow. Here I just want to make the point that this approach to assessing 21st century skills is not heutagogical. Even when not 'curriculum-based' as claimed, the curriculum or content here is in fact embedded in the assessment, and this is still top-down, because the learners play no part in developing it.

I am not making any claims to teaching heutagogically here. I'm still experimenting with ways of doing that within the constraints of institutions and the limited time I have with any one student (because university courses are so short). But it's certainly something to think about: how can I help my students understand assessment in the digital age by designing ours together? This semester my language assessment students and I were able to co-construct our assessment rubric and negotiate deadlines. Next semester (if I get to teach this again), could the students play a bigger role in designing the assessments and therefore determining the content?

#rhizo15 week 2: Learning is uncountable, so what do we count?

4 min read

This isn't one of my scheduled posts for thematic tweets, and has nothing to do with those as such. It's a little something for me to get my feet wet with #rhizo15. I've been hesitant to get started with #rhizo15 because I doubted my ability to contribute something. Given my issues with the much easier ATC21S, though, I thought I should try harder with #rhizo15, and balance my first real xMOOC experience with a cMOOC one.

As I type this, week 3 has already started, but I'll post my week 2 contribution anyway -- it was hard enough to come up with! Here's Dave's week 2 prompt. You'll note that it's conveniently right up my assessment alley. I don't know if I can respond to week 3's the same way!

Warning: my response is a rough, incomplete thing but maybe this is par for the course for rhizo learning. (I should confess here that I am ambivalent about rhizomatic learning as a theory, and hope that this experience helps to sort out my ideas about it.)

Okay. So we can't count learning. But I've always accepted this. Psychometricians working with Item Response Theory talk about latent traits: 'latent is used to emphasize that discrete item responses are taken to be observable manifestations of hypothesized traits, constructs, or attributes, not directly observed, but which must be inferred from the manifest responses' (Wikipedia). 

So when we assess, we are not measuring actual traits (or abilities) but the quality of evidence of such. It's all inferred and indirect, so we can't measure learning in the sense of holding a ruler up to it ('let's look at how much you've grown!').
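
For anyone who likes to see the machinery, here's the simplest case, the Rasch model (the same family of measurement model that sits behind the ATC21S materials I write about in another post). The probability that person n answers item i correctly depends only on the gap between the person's latent ability (theta) and the item's difficulty (delta); theta is never observed, only inferred from the pattern of responses:

    P(X_{ni} = 1 \mid \theta_n, \delta_i) = \frac{e^{\theta_n - \delta_i}}{1 + e^{\theta_n - \delta_i}}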

Also, learning happens continuously -- we can't start and stop it at will. We can't measure it, even indirectly, in real time, the way you might measure temperature. By the time the test finishes and is marked or feedback given, learning has already moved on.

So we never measure learning per se. As Lisa says, it's only symbolic. It's just a useful fiction.

But perhaps Dave's question is not about measuring the quality of such tangible evidence? At least not the conventional kind?

If it isn't about product, is it about process, which some teachers already assess?

Are we talking about measuring 21st century 'skills' like CPS (see previous post)? ATC21S has very cleverly broken down CPS into more easily measurable bits, but when a construct is broken down like that, its integrity tends to suffer (something I forgot to include in my previous post). Is it about measuring literacies (situated social practices), as I'm attempting to tackle in my study? Learning dispositions?

But tangible evidence is also required to 'see' all the above. Are we talking of true 'unmeasurables', if psychometricians admit to any? What might they be?

Maybe it's about assessment that isn't externally imposed -- self assessment? How do we activate learners as owners of their own learning, as per Wiliam's framework of formative assessment? How do we make reflective learning second nature?

How can we give self assessment currency, given stakeholders' obsession with reliability of measurement and 'fairness'? How can we give it validity? And have people understand and accept that validity?

Which leads to heutagogy. We have to be good at it to cultivate it in others; our education ministry says teachers should cultivate self-directed learning (SDL) capabilities in our learners, but how do they cultivate it in themselves? How can we be self-directed about SDL?

How about we throw out quantitative measures? No counting! Maybe that's how we throw out the comparing and ranking of norm-referenced assessment that people tend to default to (I'm not sure how many participants really got criterion-referencing.)

How about we become ethnographers of learning? Help learners become autoethnographers of their own learning? The kind that's mostly, if not 100%, qualitative. (Before you say that the average teacher has too much to do, recall that she has an entire class of potential research assistants.) I'm sure this is (as usual) not an original idea. Do you know anyone who's tried it?

'Not everything that can be counted counts, and not everything that counts can be counted.' - William Bruce Cameron

ATC21S week 2: A closer look at 21st century skills: collaborative problem solving

7 min read

This week I'm somewhat distracted by an upcoming trip to Bangkok to present at the 2nd Annual Asian Association for Language Assessment Conference. This is the first time I am formally presenting on my study, so I'm quite nervous! Fortunately I was able to squeeze in some time for week 2 of ATC21S.

Here's a quick summary of this week's lesson:

1. What is collaborative problem solving (CPS)? There are existing problem solving models (cited are Polya, 1973, and PISA, 2003/2012), but they do not include the collaborative component. Therefore ATC21S has come up with their own:

  • Collect and share information about the collaborator and the task
  • Check links and relationships, organise and categorise information
  • Rule use: set up procedures and strategies to solve the problem using an “If, then..” process
  • Test hypotheses using a “what if” process and check process and solutions

The CPS construct is made up of social skills and cognitive skills.

2. Social skills are participation, perspective taking and social regulation skills. These can be further unpacked:

  • Participation: action, interaction and task completion
  • Perspective taking: responsiveness and audience awareness
  • Social regulation: metamemory (own knowledge, strengths and weaknesses), transactive memory (those of partners), negotiation and responsibility initiative

There are behavioural indicators associated with each of these elements. (At this point, I was pretty sure that Care and Griffin don't mean to suggest that teachers conduct Rasch analysis themselves, but rather use already developed developmental progressions.)

3. Cognitive skills are task regulation, and knowledge building and learning skills:

  • Task regulation: problem analysis, goal-setting, resource management, flexibility and ambiguity management, information collection, and systematicity
  • Knowledge building and learning: relationships, contingencies and hypotheses

Again, each element has associated indicators.
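
Just to keep the structure straight in my own head, here is a toy sketch in Python of how the construct unpacks (my own paraphrase of the notes above, not ATC21S's actual instrument; the behavioural indicators attached to each element are omitted):

    # Toy outline of the ATC21S CPS construct as summarised above (my paraphrase).
    # Two strands of skills, each unpacked into elements; in the real framework
    # every element also carries observable behavioural indicators.
    cps_construct = {
        "social skills": {
            "participation": ["action", "interaction", "task completion"],
            "perspective taking": ["responsiveness", "audience awareness"],
            "social regulation": ["metamemory", "transactive memory",
                                  "negotiation", "responsibility initiative"],
        },
        "cognitive skills": {
            "task regulation": ["problem analysis", "goal-setting",
                                "resource management",
                                "flexibility and ambiguity management",
                                "information collection", "systematicity"],
            "knowledge building and learning": ["relationships", "contingencies",
                                                "hypotheses"],
        },
    }

    # A quick look at what each strand unpacks into:
    for strand, elements in cps_construct.items():
        print(strand, "->", ", ".join(elements))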

4. We come back to the developmental approach that integrates the work of Rasch, Glaser and Vygotsky. Teachers need a framework that they can use to judge where their students are in their CPS development. There are existing ones (such as the ubiquitous Bloom's), but none are suited to measuring CPS skills. So what we need is a new empirically derived framework that allows teachers to observe students in CPS action and judge where they are.

5. Empirical progressions are explained, and examples such as PISA and TIMSS given. We are then presented with the progression that ATC21S has developed for CPS. The table is too large to reproduce here, but essentially it expands all the elements in 2 and 3 above into progressions so that you end up with five scales.

 

Impressive, right? Except I'm not quite sure about the tasks they used to develop this. The example they showed was of two students connected by the internet and chatting by typing, attempting to solve what appears to be more of a puzzle than a problem. That is, the sort of problem teachers cook up to test students' intellectual ability (shades of ?). The 2nd volume of the book series actually has a chapter that discusses this in more detail and seems to confirm that they used puzzles of this sort. I understand of course that doing it in this way makes it easier to collect the sort of data they wanted. But given that the tasks aren't very authentic, to what extent are they representative of the target domain? Are there issues of construct validity? I will need to read further, if there is available literature, before I make up my mind. It would be interesting, if not already done, to conduct a qualitative study using more authentic problems, more students per team, observation, artefact collection, (retrospective) interviews, and so on. You won't get the quantity of data as with the ATC21S study, but this sort of rich data could help us check the validity of the framework. It could also be of more practical value to teachers who actually have to teach and assess this without fancy software and a team of assistants.

I won't deny that I'm rather disappointed that Rasch measurement is really 'behind the scenes' here, though I'm not surprised. I can't help but wonder if it's really necessary to make Rasch appear so central in this course, especially since some of my classmates seem to misunderstand its nature. This is not surprising -- Rasch is not the sort of thing you can 'touch and go' with. There is some confusion about criterion referencing too (IMO it's hard to make sense of it without comparing it to norm referencing and explaining how each is typically used in assessment). ZPD is faring a little better, probably since it's familiar to most teachers. I am however surprised to see it occasionally referred to rather off-handedly, as if it's something that's easy to identify.

Would it make more sense to focus more on the practicalities of using an established developmental progression? It's too early to say I guess, but already quite a few of my classmates are questioning the practicality of monitoring the progress of large classes. This is where everyday ICT-enabled assessment strategies can come into play. I also hope to see more on how to make assessments really formative. I learnt from the quiz this week (if it was mentioned elsewhere I must have missed it) that assessments that are designed to measure developmental progression are meant to be both formative and summative. Okay, great, but IMO it's all too easy to miss the formative part completely without even realising it -- remember that an assessment is only formative if there's a feedback loop. The distinction between the two uses cannot be taken lightly, and there really is no point harping on development and ZPD and learning if we ignore how assessment actually works to make progress happen.

Which brings me to the assessment on this course. If you're happy with the quizzes so far you might want to stop reading here.

 

Diligent classmates may have noticed from my posts that I REALLY do not like the quizzes. Initially it was the first so-called self-assessment that I took issue with. Briefly, its design made it unfit for purpose, at least as far as I'm concerned. After doing another 'self-assessment' for week 2 and the actual week 2 quiz, I'm ever more convinced that the basic MCQ model is terrible for assessing something so complex. It's quite ironic that a course on teaching and assessing 21C skills should utilise assessments that are assuredly not 21C. Putting what could be a paper MCQ quiz online is classic 'old wine in new bottles', and really we cannot assess 21C skills using 19C or 20C methods. I have written (to explain my own study) that:

... digital literacies cannot be adequately assessed if the assessment does not reflect the nature of learning in the digital age. An assessment that fails to fully capture the complexity of a construct runs the risk of construct under-representation; that is, being ‘too narrow and [failing] to include important dimensions or facets of focal constructs’ (Messick, 1996, p. 244).

Surely we cannot claim that understanding the assessment and learning of 21C skills is any less complex than the 21C skills themselves? Of my initial findings, I wrote that:

We may be able to draw the conclusion that the assessment of digital literacies is a 21st century literacy twice over, in that both digital literacies and the assessment thereof are new practices that share similar if not identical constituents.

Telling me that the platform can't do it differently is an unsatisfactory answer that frankly underlines the un-21C approach taken by this course. 21C educators don't allow themselves to be locked in by platforms. It seems that the course designers have missed out on a great opportunity to model 21C assessment for us. I'm not saying that it would be easy, mind you. But is it really possible that the same minds who developed an online test of CPS can't create something better than this very average xMOOC?

Okay, I should stop here before this becomes an awful rant that makes me the worst student I never had. I am learning, really, even if sometimes the learning isn't what's in the LOs. And I will continue to persevere and maybe even to post my contrary posts despite the threat of being downvoted by annoyed classmates :P

Talking formative assessment at e-Fiesta 2014

4 min read

This was originally posted on 8 April 2014 (Wordpress).

I was really pleased and excited when the NIE Centre for eLearning (CeL) invited me to speak at e-Fiesta 2014. CeL suggested the topic, I guess based on what they know about my research interests.


If you're interested in how assessment can be done with social media, watch the video of my talk below, courtesy of CeL. The slides can be viewed below as well.



I see the talk as an exercise in formative assessment with social media in itself, and I want to explain the thinking behind it here.


The invitation came at a time when I was planning my PhD coursework essay on digital literacies, and I had thoughts of rehashing some of the stuff that was going to go into my essay. Eventually, though, I realised that my goal should be to make both assessment and social (media) learning accessible to a crowd which might be ambivalent about these topics. I also wanted to make it hands-on to some degree, because there's nothing like making people give something a go while they are your captive audience. This is of course harder to manage in a lecture theatre, but it also actually helps me make my case for using social media.


I only had a maximum of 30 minutes to work with, including 10 minutes for Q & A. I decided not to try and be clever about it; the talk would have a bit on formative assessment and a bit on social learning, before we checked out what they look like together.


There were a couple of important considerations. I had to practise some audience awareness, tap into what MOE teachers already know (activate some schema?) and work in some MOE buzzwords. I realised in hindsight that this makes the gross assumption that everyone in the audience would be MOE teachers, but I think there were enough on the day to make it work.


I also had to make sure the tech worked as frictionlessly as possible. This meant keeping the tools simple and mobile friendly, and making sure the audience could access what I wanted them to access as quickly and easily as possible. I started with customised bit.ly links, and added QR codes when Rachel from CeL reminded me that those on their phones and tablets could take advantage of them. I also scheduled tweets that outlined my talk and provided links as I went along. The tweets weren't totally in sync with my talk (I should have rehearsed more), but they ensured that the audience was never totally lost and that folks 'at home' could follow along as well. They also kept my backchannel presence active while I was speaking, perhaps working to pull those monitoring the hashtag into the conversation.
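
If anyone wants to script the QR code step rather than use a web generator, it only takes a couple of lines. Here's a minimal sketch assuming the Python qrcode package; the shortened link is a placeholder, not one of the actual links from the talk:

    # Minimal sketch: turn a shortened link into a QR code image to drop onto a slide.
    # Assumes the 'qrcode' package (pip install qrcode[pil]); the URL is a placeholder.
    import qrcode

    img = qrcode.make("https://bit.ly/example-talk-link")
    img.save("talk-link-qr.png")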


The one thing I wish I had done better was managing my time. I have a tendency to go 'off script', which might engage the audience more, but also results in some messiness when time is tight. But I think I succeeded in delivering a session that was engaging without being 'fluffy'. I wanted the audience to go away thinking the issues were worth mulling over further and taking action on, but I didn't want the typical academic conference 'snoozefest' presentation (not that I've ever delivered one, ahem). I think the balance I struck was ok for the crowd I had. In fact, I think I actually managed to talk seriously about assessment without inserting too much impenetrable jargon LOL. Naturally, there were a million other things I wished I could have worked into the talk. Thankfully, plenty of questions and comments came in via the backchannel (as I'd asked for), and I was able to stay on my soapbox for much longer than 30 minutes!


I hope I managed to demonstrate in that short space of time how formative assessment can be integrated into teaching and learning, and how this can be very effectively achieved with social media. I also hope that in experiencing it for themselves as learners, the audience are more inclined to put it into practice as teachers. Lastly, I hope my session sparked some interest in assessment issues. Assessment literacy issues bother me a lot, and every time I 'talk assessment' I hope I'm helping to raise awareness, provoke important questions or otherwise plug the gap in some small way.