Thursday, March 5, 2015

Assessing the Assessments



We see this pattern with learning technologies as well. Why aren’t online assessments keeping up with new technology? Because they can’t, and we can’t. The pace of technological growth is outstripping our ability to keep up with it. Furthermore, you are combining the field of technology (a fast-growing, fast-paced industry) with pedagogy and education, a field notorious for its love of tradition and its resistance to change. New technologies crop up constantly, and early adopters are saying, “Woo-hoo! Digital glitter! Shiny! New! This will make learning fun and show our stakeholders we’re culturally relevant and on the cutting edge!” Meanwhile, traditional pedagogues are saying, “Wait a minute. That hasn’t been tested! Who’s to say that actually WORKS in making the learning process more effective?” This is essentially the reason Whitelock and Watt (2008) gave for the current gap between assessments and the new tide of technology. It’s one thing to get creative with the lesson plan and throw in social media or some new technology, but when it comes to assessment and metrics, people want tried-and-true methods for testing whether the learning was effective.

So that’s one issue. Another is that the old multiple choice test questions, which get translated into e-Learning as simple multiple choice quiz slides, are woefully inadequate for testing most of the upper tiers of Bloom’s Taxonomy. It’s hard to know whether mechanics are learning how to do their job when you ask them trivia about transmission parts or the history of a car instead of having them demonstrate how to take one apart and put it back together. So why do we stick with multiple choice assessments? That’s what we’ve always done (again, the tradition answer), and it’s also easy. A demonstration assessment, especially online, presents technical challenges that require a lot of thought, effort, and time to solve (Clarke-Midura & Dede, 2010).

Daphne Koller addressed this problem on the TED stage, discussing how anything other than multiple choice answers is hard to pull off online. How do you have a quiz auto-grade an essay or a short answer, for example? One suggestion she gives is self-assessment coupled with peer review. This dovetails with the suggestion of Clarke-Midura and Dede (2010) that mentoring and direct observation with feedback are more effective methods than paper tests or multiple choice. While peers are seen as being on “equal footing” with one another, this is not consistent across the board. Each peer has strengths and weaknesses in a variety of areas and can mentor others in the areas where they are strongest. The cumulative interaction of peers creates a kind of collective mentor in an online space.

We have many cool opportunities for new ways of looking at assessment, whether through virtual worlds or a video conference call where people share their screens and demonstrate what they know. Mini-games, as long as they are tied to the learning objective, can also be an effective and fun assessment. What needs to happen now is that these methods need to be tested. Look at something like Lumosity.com: the site boasts that its games are based on “brain science” and can boost your memory and other cognitive skills. However, the consensus at this point is that these games don’t work. What’s actually happening is that you are getting better at playing the games themselves, and there is no evidence as of yet that this translates to anything outside of the games. That should be a warning to us about adopting new methods of assessment too soon. One rule of thumb, though, is to keep the assessment as close to the real world as possible. The more the assessment feels like the real thing, the better chance you have of making it effective. This is why flight simulators work and Lumosity.com doesn’t.

What are your thoughts for the future of assessment? Leave a comment if you’d like.

References:
Clarke-Midura, J., & Dede, C. (2010). Assessment, technology, and change. Journal of Research on Technology in Education, 42(3), 309-328. Retrieved from the Walden Library using the Education Research Complete database.
Koller, D. (Speaker). (2012, June). What we’re learning from online education [Video]. TED Talks. Retrieved from http://www.ted.com/talks/daphne_koller_what_we_re_learning_from_online_education?language=en
Whitelock, D., & Watt, S. (2008). Reframing e-assessment: Adopting new media and adapting old frameworks. Learning, Media and Technology, 33(3), 151-154. Retrieved from the Walden Library using the Communication & Mass Media Complete database.
