Saturday, March 14, 2015

When and Why Cheating is Cheating, and How to Deter It



I’m making a late post for my Online Strategies course about plagiarism and cheating in online environments. To prepare, I watched a video by Dr. Rena Palloff and Dr. Keith Pratt on cheating and plagiarism. In it, Palloff stated that the rates of cheating in online and traditional environments are about the same, that in her experience students do not set out to be cheaters, and that intentional cheating and plagiarism are rare. Pratt added that he designs his assessments in such a way that he doesn’t care whether or not students cheat, because the assessments and assignments are meant to reflect real-world conditions. A student could talk to a neighbor or look something up in a book, but there is more work to be done than that to complete the assessment. Palloff agreed, noting that when an assessment requires applying available information to problem-solving, it becomes much more difficult to “cheat” in the traditional sense. The assessment demands more than memorization; it demands synthesized application (Laureate Education, 2010).

I think that is one of the most important points about cheating and plagiarism: cheating isn’t cheating merely because a student has found an easy way to get the answers to a test. “Cheating” is cheating if and when it undermines the learning process and makes what should be a level playing field unequal. If the assessment measures critical thinking and problem solving rather than memorization, then looking something up in a book isn’t “cheating.” It’s being resourceful, which is what we do in “the real world.” Constructing the assessment around critical thinking and problem solving can therefore encourage intellectual and academic honesty. Students can’t “Google” the right answer because there isn’t one right answer. They have to work one out in their own words…

Educating students early in the online environment and managing expectations at the outset is also important for preventing unintentional plagiarism. However, the research may not agree with Palloff that plagiarism and cheating are rare, or that expectation management alone will deter the vast majority of them. Cheating and plagiarism here refer to lifting partial or whole passages of another person’s work and passing them off as your own. A study at Penn State University using Turnitin detected plagiarism in about 13 percent of assignments, whereas manual detection caught plagiarism in the same set of assignments only 3 percent of the time. Further, the study found that expectation management and education about plagiarism may have made some impact, but the difference was not statistically significant. So tools like Turnitin seem necessary if you want to prevent people from borrowing the words and thoughts of others without proper attribution (Jocoy & DiBiase, 2006). Again, the problem is not using another’s work; that is encouraged. The problem is using another’s work without proper attribution, or using so much of it that it undermines the learning process: it becomes clear that the student was merely trying to complete the assignment rather than think critically about a problem and use another’s work to support their own thought process. This may be painfully obvious, but it seems worth spelling out what makes “cheating” cheating, or wrong, or counterproductive to learning…
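Turnitin’s actual matching algorithm is proprietary, but the basic idea behind overlap detection is easy to illustrate. Here is a minimal Python sketch, assuming a toy word n-gram comparison; the function names, the 5-word window, and the sample texts are all my own inventions for illustration, not anything from Turnitin or the Jocoy and DiBiase study:

```python
import re

def ngrams(text, n=5):
    """Lowercase word n-grams of a text."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=5):
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(source, n)) / len(sub)

# Hypothetical usage with made-up sample texts.
source_text = "Assessment should measure critical thinking, not recall of isolated facts."
student_text = "Good tests should measure critical thinking, not recall of isolated facts."
print(f"5-gram overlap: {overlap_score(student_text, source_text):.0%}")  # ~71%
```

Even in a real tool, a high score is only a reason to look closer; quoted and properly cited passages match just as readily as stolen ones.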

The best method to prevent dishonesty and cheating, the “stealing” of another’s work, is likely the way we prevent all other forms of stealing and dishonesty: deterrence. You don’t have to make cheating impossible. You just have to make it harder than doing the honest thing; once you reach that point, cheating loses its appeal. Making the assessment more about critical thinking than facts makes cheating harder. Plagiarism detection software makes it harder for students to find another person’s thoughts on the Internet and pass them off as their own without getting caught. Combine those tools and methods, and one of the only courses of action a student has left is to ask or pay someone else to complete the assessment for them. Even in that case, if the assignment is written, a keen facilitator can recognize the change in tone from the student’s typical writing in other assignments they have handed in. The student would have to plagiarize nearly everything, or nothing. The idea of cheating starts to look ridiculous, and the probability of getting caught looks high. At this point, you’ve effectively deterred most forms of cheating. And I think that’s the goal: deter 97% of cheating, and catch the other 3% who will be determined to cheat even when it makes no sense to do so. That’s my best advice…
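That “change in tone” a facilitator senses by eye can even be roughed out in code. The sketch below is a toy of my own, assuming two crude style features and nothing like what real stylometry tools actually do; a big shift is only a cue to look closer, never proof:

```python
import re

def style_features(text):
    """Crude style fingerprint: (average sentence length in words,
    vocabulary richness = distinct words / total words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return (len(words) / max(len(sentences), 1),
            len(set(words)) / max(len(words), 1))

def tone_shift(known_writing, new_submission):
    """Relative difference on each feature between a student's known
    writing and a new submission."""
    known = style_features(known_writing)
    new = style_features(new_submission)
    return [abs(a - b) / (max(a, b) or 1.0) for a, b in zip(known, new)]

# Hypothetical usage: compare a student's earlier work to a new essay.
print(tone_shift("Short words. Plain style. I like it.",
                 "Notwithstanding aforementioned considerations, the "
                 "epistemological ramifications remain profound."))
```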

References:

Jocoy, C., & DiBiase, D. (2006). Plagiarism by adult learners online: A case study in detection and remediation. International Review of Research in Open & Distance Learning, 7(1), 1-15.

Laureate Education (Producer). (2010). Plagiarism and cheating [Video file]. Retrieved from https://class.waldenu.edu

Thursday, March 5, 2015

Assessing the Assessments



Technology tends to outpace our ability to adapt to it, and we see this pattern with learning technologies as well. Why aren’t online assessments keeping up with new technology? Because they can’t, and we can’t. The pace of technological growth is outstripping our ability to keep up with it. Furthermore, you are combining technology, a fast-growing, fast-paced industry, with pedagogy and education, a field notorious for its love of tradition and reluctance to change. New technologies crop up constantly, and early adopters say, “Woo-hoo! Digital glitter! Shiny! New! This will make learning fun, and show our stakeholders we’re culturally relevant and on the cutting edge!” Meanwhile, traditional pedagogues say, “Wait a minute. That hasn’t been tested! Who’s to say that actually WORKS in making the learning process more effective?” This is essentially the reason Whitelock and Watt (2008) give for the gap between current assessments and the new tide of technology. It’s one thing to get creative with the lesson plan and throw in social media or some new technology, but when it comes to assessment and metrics, people want tried-and-true methods for testing whether the learning was effective.

So that’s one issue. Another is that old multiple-choice test questions, which get translated into e-learning as simple multiple-choice quiz slides, are woefully inadequate for testing most of the upper tiers of Bloom’s Taxonomy. It’s hard to know whether mechanics are learning how to do their job when you ask them trivia about transmission parts or the history of a car instead of having them demonstrate taking one apart and putting it back together. So why do we stick with multiple-choice assessments? That’s what we’ve always done (the tradition answer again), and it’s also easy. A demonstration assessment, especially online, presents technical challenges that take a lot of thought, effort, and time to solve (Clarke-Midura & Dede, 2010).

Daphne Koller addressed this problem on the TED stage, discussing how hard anything other than multiple-choice grading is to pull off online. How do you have a quiz auto-grade an essay or short answer, for example? One suggestion she gives is self-assessment coupled with peer review. This dovetails with Clarke-Midura and Dede’s (2010) point that mentoring and direct observation with feedback are more effective methods than paper tests or multiple choice. While peers are nominally on “equal footing” with one another, this is not consistent across the board: each peer has strengths and weaknesses in a variety of areas, and can mentor others in the areas where they are strongest. The cumulative interaction of peers creates a kind of distributed mentor in an online space.
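Koller doesn’t spell out how peer scores would be combined, so here is a minimal sketch of one plausible scheme. The median-plus-self-assessment policy, the function, and the tolerance value are all assumptions for illustration, not Coursera’s actual algorithm:

```python
from statistics import median

def aggregate_peer_scores(peer_scores, self_score=None, tolerance=10):
    """Combine peer-review scores into one grade.

    Uses the median so a single careless or overly generous grader
    can't drag the result far. If the student's self-assessment lands
    within `tolerance` points of the peer median, average the two to
    reward honest self-evaluation (an assumed policy, for illustration).
    """
    peer_median = median(peer_scores)
    if self_score is not None and abs(self_score - peer_median) <= tolerance:
        return (peer_median + self_score) / 2
    return peer_median

# Example: four peers grade an essay out of 100; the student self-assesses 80.
print(aggregate_peer_scores([72, 78, 75, 95], self_score=80))  # -> 78.25
```

The median is the design choice doing the work here: it keeps one outlier grader (the 95 above) from skewing the result the way a plain average would.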

We have many cool opportunities for new ways of looking at assessment, whether through virtual worlds or a video conference call where people share their screens and demonstrate what they know. Mini-games, as long as they are tied to the learning objective, can also be an effective and fun assessment. What needs to happen now is that these methods need to be tested. Look at something like Lumosity.com: it boasts that its games are based on “brain science” and can boost your memory and other cognitive skills. However, the consensus at this point is that these games don’t work. What’s actually happening is that you are getting better at playing the games themselves; there is no evidence as of yet that this translates to anything outside of the games. That should be a warning to us about adopting new methods of assessment too soon. One rule of thumb, though, is to keep the assessment as close to the real world as possible. The more the assessment feels like the real thing, the better chance you have of making that assessment effective. This is why flight simulations work and Lumosity.com doesn’t.

What are your thoughts for the future of assessment? Leave a comment if you’d like.

References:
Clarke-Midura, J., & Dede, C. (2010). Assessment, technology, and change. Journal of Research on Technology in Education, 42(3), 309-328. Retrieved from the Walden Library using the Education Research Complete database.
Koller, D. (Speaker). (2012, June). What we’re learning from online education [Video file]. TED Talks. Retrieved from http://www.ted.com/talks/daphne_koller_what_we_re_learning_from_online_education?language=en
Whitelock, D., & Watt, S. (2008). Reframing e-assessment: Adopting new media and adapting old frameworks. Learning, Media, and Technology, 33(3), 151-154. Retrieved from the Walden Library using the Communication & Mass Media Complete database.