To Cheat or Not to Cheat - Part 2
In Part 1 of this blog, I explored two frameworks, the Fraud Triangle and Self-Determination Theory (SDT), to help explain why learners may be unmotivated and tempted to cheat, and what we can do about some of the current misuse of AI systems. Part 2 focuses on practical ways to apply these frameworks. As we work through these considerations, I will point out specifically how your implementation of Yellowdig can address each of the Fraud Triangle dimensions by meeting more of the learners’ psychological needs outlined in SDT. Many of these principles can also be applied to other aspects of course design and management.
Pressure
If you do not want learners to feel pressure to cheat on a course element, make completing that part of the course low- or no-stakes:
Obviously, we cannot always grade everything on participation, allow learners to retry endlessly, or let them opt out of anything challenging. If something is too low-stakes or is not factored into the final grade, learners may be unmotivated to even begin participating. Reducing pressure is therefore a balancing act. However, providing as much autonomy as is reasonable can increase intrinsic motivation and build a stronger sense of competence, which will reduce the perceived pressure to cheat.
For Yellowdig, we specifically recommend that you:
Thinking about individual participation in these ways, and respecting the fact that how each individual contributes will impact the learning outcomes of every other student, ultimately leads to healthier communities containing more valuable conversations. If learners have a supportive community to help them learn a topic and to pick them up when they fall, they are more likely to see real benefits and to value participating. They are also more likely to feel a sense of autonomy and competence, and will therefore be less likely to want or need (i.e., feel pressure) to cheat on Yellowdig or on other parts of the course.
Learner Guidance with Decreased Pressure and Structure
People familiar with Yellowdig’s best practices for community design know that one of our strongest recommendations is not to have weekly assignments that require everyone to create a response post. This is not because we think it is bad to influence what learners talk about. In fact, it is vitally important to set expectations for learners and help guide them. Rather, the weekly prompting framework leads learners to think very differently about their role in the community, when they should participate, and what they can expect there. With weekly prompting and deadlines, they focus on the specific assignment and grade (i.e., the credential). They also tend to sit down to participate only once per week. While that may be acceptable for any individual learner, if every learner participates only once per week, there will be little interaction (i.e., relatedness). Unlike with non-social course elements, every learner who chooses to procrastinate or participate half-heartedly is failing to build a better learning environment for others.
If you do shift away from required posts as responses to prompts, an accompanying recommendation is to post anything you do want to draw attention to “randomly” throughout the days and weeks rather than building it into a “once per module” design. Conversations need not have strict start or end dates to be valuable for learning; such dates unnecessarily reduce autonomy and make conversations feel unnatural or “stilted.” Conversations are exceptionally good at blending course topics together and enable learners to integrate knowledge in ways that modular assignments cannot. If you allow it, a community can provide one of the few opportunities in a course design for learners to blend topics into an integrated and coherent understanding of the subject.
If you do use prompts and deadlines in your Yellowdig community, you will have more procrastinating learners with no points right up until the deadline. All else being equal, more learners will feel pressure to cheat to meet the deadline or to lie after they miss it. Avoiding these problems is one reason why our weekly point system has a buffer built into the weekly goal and why we use a course-long point goal with one hard deadline at the end of the term. In over five years of supporting Yellowdig, every confirmed instance of plagiarism I have become aware of has been in a community that ignored this best practice, using weekly prompts while communicating our weekly rollover as a hard deadline for a weekly discussion assignment. I am not naive enough to believe I am aware of all instances, but I take this as clear evidence that cheating is significantly reduced by finding ways to make deadlines “softer” and the consequences for missing them less severe.
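To make the deadline mechanics concrete, here is a minimal sketch of how a course-long point goal with capped weekly earning creates a built-in buffer, compared with a series of hard weekly deadlines. The function names and all numbers are hypothetical illustrations for this post, not Yellowdig’s actual scoring algorithm.

```python
# Hypothetical sketch of "soft" vs. "hard" deadline point systems.
# Not Yellowdig's actual algorithm; numbers are illustrative only.

def course_goal_total(weekly_points, weekly_cap, course_goal):
    """Credit up to weekly_cap per week toward one course-long goal.

    Because the sum of the weekly caps exceeds the course goal, a few
    missed weeks act as a buffer instead of a permanent penalty."""
    earned = sum(min(p, weekly_cap) for p in weekly_points)
    return earned, earned >= course_goal

def hard_deadline_total(weekly_points, weekly_goal):
    """Each week is a separate assignment: any week below the goal is
    credit lost forever, concentrating pressure at every deadline."""
    return sum(weekly_goal for p in weekly_points if p >= weekly_goal)

# A learner who misses two of twelve weeks can still fully meet a
# course-long goal of 1000 points under the soft system...
weeks = [100] * 10 + [0, 0]
earned, met = course_goal_total(weeks, weekly_cap=100, course_goal=1000)

# ...whereas under hard weekly deadlines the same learner permanently
# loses 200 of the 1200 possible points, with no way to recover them.
hard_earned = hard_deadline_total(weeks, weekly_goal=100)
```

The design point is that under the soft system a missed week changes nothing about the learner’s eventual outcome, so there is no single moment where cheating is the only way to avoid a permanent loss.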
Rationalization
Remember that a base assumption of SDT is that people have an orientation toward growth. Combined with the reality that most learners in higher education are investing time and money, it seems fair to conclude that most learners will not cheat without justifying it to themselves. Most learners are probably also not simply immoral or amoral; to cheat but not consider themselves a “bad person,” they have to rationalize their actions. Rationalization is a defense mechanism that occurs "when the individual deals with emotional conflict or internal or external stressors by concealing the true motivations for their own thoughts, actions, or feelings through the elaboration of reassuring or self-serving but incorrect explanations" (Widiger et al., 1994). People rationalize to protect their sense of identity when they are doing something that could be wrong or are not doing something they know they should. The more mental acrobatics a rationalization requires, the more difficult it is to generate and the less protective it is.
Some high-level ways to reduce learners' ability to rationalize are to:
For Yellowdig, we encourage the following specific practices to reduce rationalization, each of which is described in more detail below:
Consider the Role of Grades and De-Emphasize Points. Our platform features a gamified point system that “automatically grades.” It may be surprising, then, to hear that one of my practical recommendations is to reduce focus on how learners collect points and not to try to make participation sound more “fun” by discussing the gamified elements. The reasoning is simple. Learners should be focused on what they should do in the community and what they will gain educationally by participating. They will already be focused enough on what they have to do to earn their grade. This is largely why our platform does not do more to highlight points and earning. The system simply encourages behaviors that lead to healthier communities.
Below is a good example of how a recent Yellowdig Award winner for Community Health, Dr. Ron Guymon from the University of Illinois Urbana-Champaign, presented Yellowdig to his Accounting learners. Dr. Guymon did a good (and quick!) job of focusing learners on the actual goal and how their participation would benefit their learning, and of setting the tone that Yellowdig would be enjoyable and interesting. There is no sense that participation would simply be a series of hurdles to overcome for a grade. The image at the bottom shows a quick video he recorded on his phone, introducing himself and showing some excitement about what he hoped his learners would get from the experience.
Below is a preview of Dr. Guymon's introductory post:
On the opposite end of the spectrum, I once consulted with a partner who shared 10 single-spaced pages of instructions for how learners should participate in Yellowdig, including clear expectations and assessment rubrics. I cannot say with certainty that this approach yielded more cheating than Dr. Guymon’s, but we were asked to consult because of poor engagement. Most of the unengaged learners were not even entering Yellowdig because the assignment description was long and demotivating; they did not even need to look to convince themselves it was not going to be a good use of their time. If we are lucky enough to get learners to read 10 pages to learn something, those pages should be about the course topic, not how to get a good grade. The course designers, for all of their good intentions, had made Yellowdig sound like an uninviting obstacle rather than a helpful place to learn and get support.
I think this recommendation is applicable to most assignments. Many modern course design frameworks incorporate assessment in almost every aspect of the course from the outset but rarely discuss the potential impacts of that on learner motivation. Depending on how assessments are used and communicated to learners, we risk further feeding into their tendency to focus only on getting better grades; that might be good if it only motivated learning, but cheating can also lead to better grades.
Build Intrinsic Motivation to Curtail Rationalization.
The Yellowdig point system, like every grading system in existence, is an extrinsic motivator. The tricky part about extrinsic rewards is that their impact can be extinguished over time, or too much focus on them can prevent the formation of intrinsic motivation (see Motivation Crowding Theory). What is unique about Yellowdig, and what prevents that extrinsic reward from losing its impact over time, is that the behaviors encouraged by our point system lead to the formation of social bonds between learners and build communities that fulfill more learner needs. As a useful community forms, more learners are able to find intrinsically motivated reasons to participate, which creates a positive feedback loop that makes the community a more and more valuable place. The true impact of our point system is not automatically grading learners or even “engaging” learners by getting them to care more about getting points. It helps build a useful community, so learners stop needing the point rewards at all. If learners truly feel something is helping them or is enjoyable, they will require little pressure to participate. They will also have a difficult time rationalizing cheating on the task, even with easy opportunities. When it comes to Yellowdig, if they find the community useful and are already participating more than they need to, there will literally be nothing to cheat on.
I frequently get strongly skeptical looks when I say that the average learner will participate in Yellowdig well beyond their point requirements if you can foster a healthy community that meets their needs. The reality is that so many rules govern their learning that most learners yearn for some autonomy to control their own learning journey. If learners do not see participating as a chore or “busy work” but as a resource for their learning, they will be motivated to participate at least enough that they will not cheat. It becomes very difficult for a learner to rationalize the risks of cheating on something that is not hard and “isn’t so bad,” especially if it is truly helping meet their needs.
Opportunity
AI tools are frightening to anyone who has been using written learner responses for assessment (nearly everyone!) because they provide an easy opportunity to generate work that is not the learner’s own, differs from known writing samples, and is difficult to detect, given the normal flaws in novice writing. Detecting the use of ChatGPT is “blurrier” than detecting plagiarism, from both a human and a computer perspective, and cannot be known with certainty (Steele, 2023), especially with small writing samples like those typically submitted for a discussion (under 200 words). ChatGPT’s own detector worked best with at least a 1,000-word sample, required submissions of at least 250 words, and specifically said it should not be used to punish learners. It has since been removed from the market due to reliability concerns. Learners may use ChatGPT for many of the reasons they plagiarize, but ChatGPT provides easier opportunities with lower odds of being caught and punished severely. This is part of why I argued in a prior blog that the “detect and punish” approach is likely to fail, especially if you consider the odds of false positives and the problems they could cause. So what can we do to tamp down easy opportunities to cheat?
We have seen appreciable improvements in engagement outcomes where the only change was swapping “discussion board” with “community” in the Yellowdig assignment description. Learners’ existing expectations, and what they think their role is as they enter Yellowdig, set the stage for how they behave.
I am sure most people are aware of some of the general ways to reduce opportunity, which is probably the most used way to address learner cheating:
Recommendations for Yellowdig are a bit more specific and maybe more novel:
At the end of the day, learners motivated to learn by a helpful and engaged instructor and by other helpful and motivated learners do not find easy opportunities to cheat… they are simply not looking for them.
Conclusion
Grades are motivating to learners, but there are two ways to get good grades: 1) work hard and participate thoughtfully, or 2) cheat. Both will get learners their credentials. Only one of them leads to learning and growth. When a community is a valuable place to interact with other learners, and learners appreciate it as a resource that helps them learn, the pressure to cheat and the ability to rationalize doing it are all but extinguished. Dynamic conversations in an interactive community also reduce the opportunity to deceptively “copy/paste” something that is not theirs.
The design ideas in this blog may not always be applicable to every aspect of a course, but they can be applied to help your course meet more learner needs. It is important to take advantage of the opportunities we have to do this, because we often cannot make things like tests lower pressure or better able to provide learner autonomy. Remember, if you believe in SDT, then learners are already motivated to grow. We do not need to do more to motivate learners; we need to do fewer things that thwart their needs, so they stay motivated and see no need to cheat.
Ultimately, academic cheating is a complex problem that requires a multifaceted solution. ChatGPT and other generative AI have not simplified that problem. Careful attention to how your design choices interact with learners’ psychological needs and motivations, and to how the context you are creating fosters or tamps down the conditions that lead to academic dishonesty, can significantly reduce the extent to which ChatGPT appears to pose an existential threat to education. In the process of adapting to this reality, we can create learning experiences that are more inclusive, less punishing, and less likely to lead to academic dishonesty. The good news is that these changes will also help promote the engagement, interactions, and learner behaviors that lead to effective and enjoyable learning experiences.
About the Author:
Brian Verdine, Ph.D., is the VP of Academic Product Engagement at Yellowdig. Brian received his Ph.D. in Psychology from Vanderbilt University’s Peabody College of Education and Human Development. He went on to a postdoctoral position in the Education department at the University of Delaware, where he later became, and continues to be, an Affiliated Assistant Professor. His academic research and his now-primary career in educational technology have focused on understanding and improving learning outside of classrooms in less formal learning situations. At Yellowdig, he manages all aspects of Client Success, with a strong focus on how implementation in classes influences instructor and student outcomes.
This post was originally published on yellowdig.co.