The Online Journal of Peace and Conflict Resolution
OJPCR: The Online Journal of Peace and Conflict Resolution is a resource for students, teachers and practitioners in fields relating to the reduction and elimination of destructive conflict. It is a free, yet valuable, source of information to aid anyone trying to work toward a less violent and more cooperative world.
Creating a More Peaceful Classroom Community by Assessing Student Participation and Process
John V. Shindler
Many teachers incorporate some form of assessment of their students' class participation. It might be called group work, lab process, cooperative group behavior, or class participation, but it comes down to essentially the same thing: assessing the quality of a student's non-academic performance using subjective criteria. Richard Stiggins (2001) suggests, "In one sense using observations and judgments as the basis for evaluating student dispositions is a practice as old as humankind. In another sense, it is an idea that has barely been tried." This article examines the abundant benefits and substantial cautions related to using a system for assessing student participation and/or process, and offers practical steps for developing a working system for use in the classroom.
On the one hand, with a sound, well-defined, systematic, student-involved procedure that is reliable in the minds of both teachers and students, assessing "participation" has the capacity to produce a substantial positive influence (Craven and Hogan, 2001; Lyons, 1989). It can provide a class of students with a structured pathway to more peaceful functioning and a foundation for good classroom management. As Craven and Hogan (2001) suggest, "effective classroom management is established by clearly communicating expectations to students and defining acceptable levels of performance." They found that evaluating student participation provides these clear expectations as well as solutions to many of the current demands on teachers related to making learning more authentic while responding to accountability measures. On the other hand, giving a grade for "participation" that is vague, undefined, and seen as a subjective judgment will have little benefit and is more likely to have a harmful effect overall (Shindler, 2002). Used arbitrarily, it will likely be seen by students as a part of their grade over which they have little control, and as just another tool for the teacher to reward students they like and punish those they do not. For these reasons, I feel that as teachers we should implement participation assessment thoughtfully or refrain from using it at all.
While the primary incentive for adopting a formal system for assessing student participation would seem to be rewarding good behavior in an attempt to change it, a deeper examination reveals at least three other significant benefits. First, much of the potential power of such a system for student change comes from the fact that the focus of the assessment is student-controlled behavior. Assessing 100% student-owned behavior promotes a sense of internal locus of control within students and, consequently, greater self-esteem (Benham, 1993; Rennie, 1991). Promoting an internal locus has been shown to have a positive effect on academic motivation as well (Covington, 1998; Maehr, 1997). One of the most significant long-term benefits of assessing student-owned behavior is its capacity to help students shift their orientation away from what Carol Dweck (2000) refers to as a fixed view of intelligence/ability and a "helpless pattern" toward a "mastery pattern." Dweck describes how students with a fixed intelligence/ability orientation persist less and ultimately achieve less than students who focus more on their level of investment and the process of solving a particular learning challenge. This cause-and-effect relationship between effort and success is clarified in a very real and meaningful manner when we assess the investment rather than that which is substantially a byproduct of talent.
Second, a system of assessment that gives formal attention to the quality of interpersonal interactions helps open students' thinking to an epistemology that places the welfare of others within the domain of one's own success. Shepard (2000) speaks of the need for assessment practice to move away from an objectification of knowledge toward more constructivist structures that "promote learning as a process of mental construction and sense making" and that do not abstract learning away from the human context in which it occurs. In a very real and experiential sense, what we assess characterizes what is important and creates a tacit definition of success in our classes (Shindler, 2002). The epistemology of value is played out daily in our assessment practices. This relationship then raises the question: do we want to place formal value on the quality of the human interactions in our classes? If we see the role of education as one that teaches whole persons, we might consider bringing a greater range of domains, including the quality of interpersonal behavior and the process of meaning construction, into what we systematically assess.
Third, in the process of using such a system, students collectively develop a definition of quality interactions (and/or quality personal investment in one's learning), and they internalize a collective concept of those behaviors. This concept, clarified by examples and non-examples of behaviors that help the group function in a healthy manner, provides a collective language and ethos for the group. While class discussions are useful in developing such an understanding, as Tanner (1994) suggests, assessment adds a vital dimension to students' development of the concept of a good outcome and provides a concrete and meaningful mechanism for reflection. Tanner (1994) noted in her research, "participation in peer and self-assessment was found to involve the student in a recursive, self-referential learning process which supports the explicit development of meta-cognitive skills." Users of a soundly implemented system for assessing process and/or participation witness this recursive process of action and reflection leading to higher levels of awareness and, ultimately, growth. While an outside observer might assume that any behavioral change resulting from such a system would come from a compliance response (e.g., doing the right thing because it is being graded), in fact little long-term change results from this kind of motivation. However, a longitudinal analysis of an effective system typically finds that behavior does in fact improve steadily over time. What may be surprising is that this improvement is rooted not so much in an external response as in how the cognitive development of the concept affects behavior, and how an understanding of that concept emerges organically from the group (Smith, 1996). As the group discovers over time what makes them feel most effective, emotionally safe, and communal, their concept of "good participation" evolves to meet those needs.
Whether one's motivation for using a system for assessing the quality of participation is to help create a classroom epistemology that values process and investment in the communal good, or simply to get a better quality of effort from students, a well-designed and well-implemented system can help. As a new teacher, I had witnessed such a system work very effectively. My initial motivation for adopting it was to have well-behaved students like those I had observed. Not surprisingly, I found that it did work: I got a better quality of that which I assessed. I noticed that my problem students changed dramatically over the course of the year. They were able to shed their pattern of negative identity and act in a more positive and ultimately satisfying manner. Moreover, the students who had come to me with good work habits and interaction skills felt validated and increasingly took to their roles as leaders and/or contributors to the "common good." I also found that when students invested in the process, both academically and interpersonally (motivated by the fact that it was formally assessed), the outcomes usually took care of themselves. So, while I was initially attracted by the system's ability to promote better behavior, what hooked me was its ability to promote a self-directed, reflective community of learners.
To develop a sound system for assessing the quality of your students' participation, a series of choices is required as you progress through the construction of your system. The choices that you make at each of the seven steps outlined below will define the nature of your system and should reflect the needs of your students, your school, and your personal vision for your classroom.
Step 1: Choose a focus area
The first step in the process of creating an assessment system is to define the behavior or process area that is to be the focus of the assessment. It could be said that when implementing a new system of any kind, if it solves an existing problem or provides a benefit that has not been previously experienced, it will last. So if you do not feel you have a need for such a system, it will not likely take root in your class. It might be useful to begin with the question, "What behavior or process, if my students did better with it, would improve the class?" You are beginning the process of creating a system to help your students reflect on and formally examine one or more specific behaviors and/or processes. What is it they could use help in improving? Some examples might include the quality of cooperative group behavior, general individual participation, lab work, station work, listening, preparedness, the process components of a performance task or workshop, or individual effort. Try to be as narrow as possible at this stage. The more focused your definition, the better your system will work.
Step 2: Select a unit of analysis
It is important that you make a clear choice at this stage as to what level your assessment will focus on: individual, group, or class. For instance, is your unit of analysis how an individual performs, either within an independent context such as a computer station or within a group such as a cooperative learning exercise? Or, given that same cooperative context, are you more interested in assessing the functioning quality of the group as a whole? Maybe you just want a way to help the class reflect on how they performed as a whole. This step in the process gets at the level of accountability that you seek. There are benefits for every level. Individual assessments are often more reliable than group assessments and more comfortable for students with better habits and/or a heightened sense of individualism, whereas group assessments better promote interdependence and, over time, a greater sense of community in a class. Whole-class assessments can be just as useful in creating the language of your system, but typically lack the accountability that helps promote behavior change.
Step 3: Determine the purpose(s) for adopting your system and thus the degree to which you will use it more formally or informally
Have a clear intent for adopting such a system, especially as it relates to student grades. Reflect upon what you are trying to accomplish through its use. It is possible to use it as a formal part of each student's grade. It is also possible to use it very systematically but outside the realm of formal grades, or even very informally. Is your primary purpose formal evaluation, informal feedback and reflection, or some combination of the two?
If you want to give formal participation grades, it is essential that you construct a technically sound system and make a substantive commitment to a very deliberate observation and data collection procedure. You need to make this subjective assessment process as objective as possible. The benefit of giving formal participation grades is that a grade shows formal value and has the power of a tangible reward. The downside is that grades can take some focus away from students' intrinsic motivation for growth. In addition, giving grades puts the teacher in the role of evaluator, which may or may not be where you want to be. If you are unsure, one idea is to start with an informal use of a system and then move to a more formal usage as you feel the need.
Step 4: Operationalize what you mean by "good _______."
Depending on the concept that you choose, be it participation, cooperative learning, group process, lab work, and so forth, your system will work more effectively the more clearly you define it in concrete, operational terms. A teacher can do this on her or his own, but this is a very good place to get your students involved. Taking the primary role in creating their own concept of a "good _____" helps a class both understand it better and own it more personally. At this stage, it can work well to use an inductive concept-attainment model to develop your concept. Begin by asking yourself or your students, whichever the case may be, the following question: "What are those behaviors that, if we did a better job with them, would make us better learners individually and collectively?" Give yourselves the following three qualifications:
All ideas have to be things that each of us could do if we chose to. In other words, we need to be 100% in control of these outcomes. So, for instance, it cannot involve things that are related to intelligence, popularity, cultural capital, or material resources.
Nothing in your definition can penalize students' personalities, learning styles, or cultures. So, we could not reward people who raised their hands a lot or talked the most, for example, as that would bias the situation in favor of extroverts.
All ideas need to be describable in concrete, specific language. For instance, instead of using a phrase such as "good class members are nice to each other," be more specific, such as "good class members only say positive things about other classmates and refrain from all put-downs." That is, any observer given your definition would need to be able to reliably determine whether a behavior was or was not being demonstrated. We should clearly know the behaviors, or their absence, when we see them.
Figure 1 shows an example of what one 5th grade class did when asked to define the concept of a "good cooperative learning group member." (Remember, this is just one example; these are by no means the only descriptors students might suggest.)
Figure 1: A three-factor definition of "Good Participation" during group work.
After developing your definition, you may want to enlarge it and post it conspicuously on the wall of your classroom, art room, music room, or gymnasium. Displaying it alone is useful, as it provides a visible reminder to students of the concept and the language they have created. However, concepts are learned and defined over time by examples and non-examples. Using the language you have created at this stage to help the class interpret behavioral choices will bring the concept on your wall to life. Yet, if you stopped here, while you would have a working concept of what constitutes "good ____," you would not be able to reliably make distinctions of quality. Step 5 takes our concept and puts it into the context of a quantifiable assessment method.
Step 5: Create an assessment instrument that is soundly constructed and easily interpreted
The next step is to put the concept you have created into a sound rubric that fits the context in which you intend to use it. This instrument will help "systematize" your definition and provide you and your students with concrete, specific language and a framework for recognizing levels of quality in your concept. As with any performance assessment rubric, the instrument you create will help both in diagnosing problems and in prescribing improvements. Used purposefully, it will help reduce the arbitrary and subjective nature of giving feedback to students. And maybe most importantly, it can help take the teacher out of the role of judge and into that of facilitator (Fleming, 1996).
It is vital that your rubric is well constructed, as technical problems develop into human problems very quickly (Shindler, 2002). There are five important considerations to keep in mind when constructing your rubric:
Returning to the class discussed earlier: when asked what makes a good group member, the students decided that there were three main factors: being cooperative, being positive, and trying. We would then take these three factors and put them into a soundly constructed rubric. In fact, if we did a good job at step 4, our words can essentially be imported directly into the top level of the rubric. One might consider creating a single holistic scale, but in this case, given that there are three distinct concepts involved, it works better to construct a primary-trait or analytic-type rubric using the three areas. When completed, it might look something like the example in Figure 2.
Figure 2: Levels of quality for being a cooperative learning group member
It is possible to create as many or as few levels as makes sense (though 3 or 4 generally work best), and to label the levels in whatever way best fits your context (e.g., 4, 3, 2, 1, 0 or +, v+, v, v-, - or A, B, C, D, E).
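To make the structure of such an analytic rubric concrete, here is a minimal sketch in Python. The three factor names come from the example class above; the level descriptors and the simple scoring function are hypothetical illustrations, not the article's actual Figure 2.

```python
# A sketch of a three-factor analytic rubric using the example class's
# factors (cooperative, positive, trying) on a 0-3 scale.
# The level descriptors below are invented placeholders; a real class
# would write its own in its own language.
RUBRIC = {
    "cooperative": {
        3: "Shares materials and ideas consistently; invites others in",
        2: "Usually cooperates, with occasional reminders",
        1: "Works alongside the group but rarely contributes",
        0: "Disrupts or withdraws from the group",
    },
    "positive": {
        3: "Says only positive things; no put-downs",
        2: "Mostly positive; an occasional negative remark",
        1: "Neutral; offers little encouragement to others",
        0: "Frequent put-downs or complaints",
    },
    "trying": {
        3: "Invests full effort for the whole activity",
        2: "Works steadily with brief lapses",
        1: "Needs prompting to stay on task",
        0: "Makes little or no effort",
    },
}

def score_total(scores):
    """Combine the three factor scores into a 0-9 total."""
    return sum(scores[factor] for factor in RUBRIC)

print(score_total({"cooperative": 3, "positive": 2, "trying": 3}))  # 8
```

Keeping each factor on its own scale, rather than one holistic score, preserves the diagnostic value the article describes: a student can see exactly which factor to work on.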
Having this scale conspicuously displayed on the wall or in a handout gives students an available roadmap for how they are being assessed, which not only promotes reliability and lends meaning to the evaluation, but also provides a clearly articulated concept of the qualities that will help your students, individually and collectively, work to their full potential. The human mind can only achieve that which it can conceive. We cannot blame our students for dysfunctional behavior when, by definition, they are acting on the best conceptions that they currently possess. Therefore, if a student is choosing to perform at less than his or her best on a given day, given that the behavior being assessed is 100% within their control, holding that student accountable sends the message that you believe they can do better. I have used this system with all grade levels, including 1st graders, and even at this age students are very aware that their behavior is a result of choice. When, at the end of a day, one group of 1st graders evaluated their collective behavior and unanimously stated that "We were about a 2 today, but tomorrow we will be a 3," one could see their understanding of the cause-and-effect relationship between investment and learning outcomes. Rubrics have the capacity to produce two very powerful effects. First, they can make the concept of quality concrete and accessible (Shindler, 2002; Stiggins, 2001). Second, they can draw the student, first psychologically and then behaviorally, upward toward her or his highest level (Craven & Hogan, 2001; Shepard, 2000; Shindler, 2002; Tanner, 1994). As Stiggins (2001) suggests, "if we have targets that are clear and standing still, students will reach them." Therefore, given a collectively established, visible scale with ascending levels of quality that each student is capable of achieving, the natural tendency is to shoot for the target at the top. And they do.
Yet, if we have no such targets, we might ask, what are our students shooting for?
Step 6: Incorporating your system for assessing participation
Once you have developed a sound instrument, you are ready to put it into practice. Yet implementation may require more art than science. As discussed earlier, the best systems are those that become a natural part of the class and are consistent with the needs of both teacher and student. As you begin to find ways to incorporate your system, keep in mind that it should evolve as your needs evolve. Invite "constructive criticism" from students periodically. Build in class time to "assess the assessment." But do not mistake students' complaints about their grades, or about the challenging nature of the system itself, for meaningful feedback. In fact, expect some level of revolt. You are asking students to respond to a new assessment paradigm. It will take time for the students with more self-centered patterns, and those who have previously had to invest little to produce acceptable work, to embrace the change. But they will, and you need to be committed enough to live through some of the growing pains.
Here I offer suggestions for each level of focus and degree of formality. You may find that you want to incorporate your system at more than one level. That can work, but be sure not to lose your clarity of purpose or confuse your students. The best way to evaluate your system's success is to ask your students to explain the system. If they can (and they are not doing it with a frown), you are probably doing pretty well.
Using your scale with individuals
If you choose to formally assess the quality of each student's participation, group work, cooperation, and so on (that is, give them a grade for it), you need to have done a very careful job of steps 1-5, or you can do needless damage to your relationships with your students. Here are a few critical actions to take to ensure that your system works to everyone's benefit:
If you are going to formally assess participation in some form, it is critical that you have an efficient method to observe and collect data from all students so as to obtain a sufficient and representative sample. And you need to collect this data in a way that does not lessen your ability to teach and interact with students. Here are some of the practical considerations in developing a system best suited to your situation:
Share your assessments in private. Let students see how they did (in your analysis) as soon as possible after the event, and make sure all scores are confidential. If not, the system will likely be seen as a way to favor the "good students" and shame the "bad students."
These scores are just another piece of information regarding a measure of class performance, so make sure you deal with them in an objective, non-personal manner. Do not praise or show disappointment as you deliver the scores. A participation score should be treated no differently than a test grade.
Every score should be viewed within the context of growth and problem solving. If a student receives a top-level score, then have them reflect on what they did to achieve it. If the score is less, then help the student understand what they need to do to improve their score. Remember the power of the rubric is its ascending structure. Use that thinking!
Should I have the students assess one another and/or themselves? It depends. If you are using participation assessment informally (i.e., not having the assessment count as part of the grade), then self-assessment is encouraged. It can be a very educational process that helps reinforce the concept. However, if you are going to use the assessment as part of a formal judgment about the quality of a performance that goes into the grade book, then putting students in the position of assessing each other will likely lead to biased scores and hurt feelings. The best rule here is to let students do informal assessment (e.g., in a writers' workshop), but when it counts, it should be done by an impartial, practiced adult.
How much time should I spend assessing? How long do I need to watch each student? Try to give each student at least two or three good looks during an activity. You will need some time between each one to get a representative sample, especially if you are using the word "consistency" in your rubric. Usually 10- to 30-second observations will give you a decent sense of what is happening. So over the course of a 40-minute period, you would need to be in the role of assessor for about 10 minutes.
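The arithmetic behind that 10-minute estimate can be sketched quickly. The class size and per-look duration below are illustrative assumptions chosen to fall within the guidelines above, not figures from the article:

```python
# Back-of-envelope observation budget for one activity.
# Assumptions (illustrative): a class of 25 students, 2 looks per
# student (the minimum suggested above), 12 seconds per look (within
# the suggested 10-30 second range).
students = 25
looks_per_student = 2
seconds_per_look = 12

total_minutes = students * looks_per_student * seconds_per_look / 60
print(total_minutes)  # 10.0 -- about a quarter of a 40-minute period
```

With a larger class or longer looks the budget grows quickly, which is one reason the article stresses keeping the observation procedure efficient.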
When do I record grades? First of all, this procedure needs to be fairly unobtrusive, if not invisible. Do not hover over students with your grade book. The students need to see you in the role of instructional facilitator first and foremost. Plus, you are looking for authentic behavior, not acting. Second, grades need to be recorded promptly after your second or third glance. Do not rely on your memory. Ideally, grades are recorded near the end of, or immediately after, the activity.
How often do I need to assess? You need to do it regularly, or your sample will be less representative and the assessments will be less valuable to students and/or will lose their importance. More than once a week is preferable. What makes your system effective, in part, is that it provides a source of regular feedback to students. Having each student's participation grades open and available for them, and them alone, to see at any point is important. There should be nothing covert about this process. Keep in mind that at first you will most likely need to explain why you are giving certain less-than-top-level grades. But these interactions are a chance for you to provide direct feedback and are ultimately very educational for both student and teacher. The need for these clarifications will decrease quickly if your system is sound.
How can I be sure that I am being fair? Pay close attention to yourself as an assessment instrument. Are you a bias-free judge? Do you have expectations that affect your ability to give each student what he or she earned? Would you really give a "3" or a "0" to any and all of your students if their behavior warranted it? If you want to check your reliability, have someone else assess your students with the same rubric during the same period and then see how your scores match up. The scores should substantially agree.
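One simple way to quantify the reliability check suggested above is percent exact agreement between your scores and a second rater's. A minimal sketch follows; the paired score lists are invented for illustration:

```python
# Percent exact agreement between two raters scoring the same students
# on the same 0-3 rubric. The score lists below are made-up examples,
# not data from the article.
def percent_agreement(rater_a, rater_b):
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must score the same students")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)

my_scores    = [3, 2, 3, 1, 2, 3, 0, 2]
second_rater = [3, 2, 2, 1, 2, 3, 0, 3]
print(percent_agreement(my_scores, second_rater))  # 75.0
```

Exact agreement is the bluntest of the standard inter-rater statistics; if agreement is low, the disagreements themselves are useful, since they usually point to a rubric descriptor that needs sharper operational language.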
If you have concerns about giving formal grades for participation or feel unable to make a commitment to the time and attention required to sustain a sound and reliable system, but like the idea of giving students individual feedback related to the level of their participation, you might want to use the rubric created in steps 1-5 in an informal manner. Some ideas for informal use include:
Using your scale with groups
With groups, you again have the choice to use your assessment informally or formally. A formal assessment of a group's process or collective behavior may help some groups focus on the process and the quality of their effort to a greater degree. Again, be sure the language in your scale uses the group as the unit of analysis. And the same care related to clarity and reliability should be taken as with the assessment of individuals. Here are a couple of ideas for using your scale formally with groups:
During any prolonged cooperative group effort, spend some time with each group. Of course, your primary role is as instructor/facilitator, but let the students know you will also be recording a participation grade for the group as a whole. Use the language from your scale to reinforce positive behavior and provide feedback to groups. If a group is struggling, you might ask them to consider what insights the "good group participation" rubric offers to help them problem-solve through their conflict. If you choose to use this technique, it will be difficult at first for some students, so it is best to reconfigure groups frequently. Send the message that you trust that they can solve their problems on their own. Help students learn to trust their own resources and skills to function at the top level. Consider adding a group grade to the project grade. This is especially rewarding to students who make an excellent effort but are not your most academic students. Students who are investing in the process and trying will almost always do excellent work in the end.
However, if you choose to use your system more informally you might consider the following:
Using your scale with the whole class
While there may be a temptation to use your system to bribe the class into good behavior, that tends to shift the locus of ownership to you and away from them. Instead, use the concept displayed on your wall to help students recognize what they do, individually and collectively, that makes the class a better place. So stay out of the role of judge; be the facilitator. Ask guiding questions, and avoid lecturing, bargaining, nagging, or threatening.
Take time at the end of any group activity and use your concept to debrief what has just taken place. This is especially useful after an extended cooperative effort. If you habitually debrief on the product outcome, students learn that the product is the important thing. If you do not debrief, it sends the message that the point of the activity was to get it done. If you debrief on the process, students learn that how they got there was important, and they have the opportunity to reflect on the means they and others used to produce quality work. Whether you decide to assess your "good ____" concept formally or informally, I would suggest taking two minutes at the end of any group activity and using it to debrief. This investment of time will pay for itself many times over in its effect on your classroom climate as well as management. Ask students age-appropriate questions such as, "Who can tell me about someone at your table who showed a positive attitude today?" or "Which group solved a problem cooperatively?" or questions related to any of the traits in your concept. At first, students will be a little hesitant, but after doing this a couple of times, you will have every student's hand up, begging to brag about one of their peers. This is a powerful time for two reasons. First, it feels great for both praiser and praisee. Second, it works as a concept development exercise, clarifying examples and non-examples of your concept of "good _____."
A final thought: if you try this idea, you need to be patient. While a sound system can have a profound effect on your class, keep in mind that most of the benefits will come in the long term. Since you are assessing the inherently intimate and intrusive area of student affect/dispositions, expect critics. Expect students and even parents and administrators to question you. It may be clumsy at first, and you may not see immediate results. But, remember, much of why this works is that it helps gradually change each student's learning orientation to one that is more self-responsible, process-focused, and communal. Fundamental change is never easy, so give it time.
References
Benham, M. (1993). Fostering Self-Motivated Behavior, Personal Responsibility, and Internal Locus of Control. Eugene, OR: Office of Educational Research and Improvement. (ERIC Document Reproduction Service No. ED 386 621)
Covington, M. (1998). The Will to Learn: A Guide for Motivating Young People. Cambridge, England: Cambridge University Press.
Craven, J. & Hogan, T. (2001) Assessing student participation in the classroom. Science Scope, 25 (1) 36-40.
Dweck, C. (2000). Self-Theories: Their Role in Motivation, Personality, and Development. Philadelphia, PA: Psychology Press.
Fleming, D. (1996). Preamble to a more perfect classroom. Educational Leadership, 54, 73-76.
Hendrickson, M. (1992) Assessing the student-instructional setting interface using an eco-behavioral observation system. Preventing school failure, 36, 26-31.
Lyons, P. (1989). Assessing classroom participation. College Teaching, 37, 36-38.
Maehr, M. L. & Meyer, H. A. (1997). Understanding motivation and schooling: Where we've been, where we are, and where we need to go. Educational Psychology Review, 9, 371-409.
Rennie, L. (1991). The relationship between affect and achievement in science. Journal of Research in Science Teaching, 28 (2), 193-209.
Shepard, L. (2000) The role of assessment in a learning culture. Educational Researcher, 29 (7) 4-14.
Shindler, J. (2002) Exploring various structural options for performance assessment scale design: Which rubric is best? National Forum of Teacher Education Journal, 12 (2) 3-12.
Skilling, M. & Ferrell, R. (2000) Student-generated rubrics: Bringing students into the assessment process. The Reading Teacher, 53 (6), 452-55.
Smith, J. (1996) Assessing children's reasoning: It's an age old problem. Teaching Children Mathematics, 2, 524-528.
Stiggins, R. (2001) Student-Involved Classroom Assessment. Prentice Hall. Upper Saddle River, NJ.
Tanner, H. & Jones, S. (1994). Using peer and self-assessment to develop modeling skills with students aged 11 to 16: A socio-constructive view. Educational Studies in Mathematics, 27, 413-431.
Dr. John Shindler is an Assistant Professor of Curriculum and Instruction at California State University, Los Angeles, and a former teacher. He is the founder and Co-Director of the Western Alliance for the Study of School Climate and the developer of the Paragon Learning Style Inventory. He currently teaches teacher education courses and publishes in the areas of classroom management and assessment.
The Online Journal of Peace and Conflict Resolution is published by the Tabula Rasa Institute.
Article Copyrights held by authors. All else ©1998-2003 Tabula Rasa Institute.