Friday, July 3, 2009

Assessing participation in online discussion groups

Background
How does the formal online education environment differ from a traditional classroom environment? Should participation in online discussion groups always be assessed, as a rule? Or should it be assessed only in certain situations? If participation in online discussion forums is assessed, what should we measure? This paper seeks to address these questions.

The comments here are in the context of education that is delivered entirely online, to students who have completed more than 12 years of education. This paper does not touch upon the issue of identity, which is inherent in any exclusively online learning environment.

Comparing the classroom and the online learning environment

The teacher, being physically present in the classroom, is in a position to know whether students are following what is being discussed and to assess the participation of individual students. While classroom participation is sometimes assessed, it is not mandatory and is usually not part of a summative, end-of-year assessment. Formative assessment of classroom participation can provide valuable feedback to the student, leading to a positive impact on the end-of-year assessment.

The formal online learning environment is different. The teacher and participating students have little or no face-to-face contact. Online discussion groups are one way of gauging the extent of student understanding (without having to wait until assignments are submitted), providing scaffolding and progressively guiding students towards the expected learning outcomes. Given the flexibility of study (and, to some extent, assessment) schedules that online learning models offer, and the geographical, cultural and time differences involved, how do you encourage participation in online discussion groups (especially in an asynchronous discussion)? Assigning a certain proportion of marks for participation seems to be a popular strategy (Palmer, Holt and Bray, 2008). However, is it relevant for the end-of-course/summative assessment? Is it authentic, reflecting real-life conditions that have meaning for the learner? Is such an assessment valid and reliable?

Relevance of Online discussion groups

The interaction between student peers, and between students and teachers, is vital to learning. At a basic level, online discussion groups serve as a means for teachers to gauge student progress, make formative assessments and decide when to move on to the next topic.

More importantly, research indicates that asynchronous discussions in an online learning environment contribute to learning. Further, learning that occurs in such a community is situated and the learning process and outcomes are shared across the learners (Han and Hill, 2007).

It is not the online delivery of content that makes the difference; it is the scaffolding and the overall student experience (McKey, 1999). A testament to this is the fact that MIT (Massachusetts Institute of Technology) offers most of its course content online, free of charge. In the absence of a classroom, the online discussion area becomes a central place for testing ideas, expressing views and debating. It provides students with an opportunity to reflect on what they have learned and to express themselves.

“Active learning is linked to students’ ability to apply knowledge to new contexts” (Bransford, Brown, and Cocking, 2000), cited in Craven and Hogan (2001, p. 37).

The sharing of information and of different perspectives in online discussion groups, leading to the co-construction of contextual knowledge, is what makes them valuable. They also provide a permanent record of each student's contribution to the collaborative process.

Authenticity, Reliability and Validity of assessment

First, let’s identify factors that do not favor formal assessment of participation in online discussion groups.

Validity of assessment
For the assessment of subject knowledge, compared with traditional and accepted formats such as cloze tests, short answers, essays or even oral examinations, participation in discussion forums demands a different set of skills. Students who are not familiar or comfortable with such a format will clearly be at a disadvantage if they are assessed for subject knowledge based on their participation in online discussion groups. Further, the assessor does not have complete control over the flow and scope of the discussion, which affects face validity.

Impact of assessment on student behavior in the discussion groups

Enforcing assessment does ensure participation, but research indicates that it also has an unfavorable impact on the nature of that participation. Palmer et al. (2008) highlight how assessment of participation in online discussion groups becomes more like an imposed exercise, where students are most likely to participate only to meet the assessment criteria. Not being the first one to post a comment or raise a pertinent question can also affect student motivation. “Other attempts to integrate computer mediated discussion with traditional classroom or working environments have shown that when participation is required, many users invariably respond with anxiety and resistance” (Komsky, 1991; Quinn et al., 1983), cited in Althaus (1996, p. 17).

Authenticity of assessment


  • Student & Teacher perceptions
    Interestingly, Gulikers, Bastiaens and Kirschner (2004) report that, for authentic assessment, the social context is perceived as the least important dimension by both teachers and students.

  • Learning styles
    Learners differ in terms of strategies they adopt. While some may prefer a collaborative approach, working with peers, others may choose to study independently.

  • Avenues for learning
    Online discussion groups are not the only avenue for learning; in today's environment various other avenues are available: course materials, textbooks, journals and audio-visual materials; the internet, where search engines make it easy to access information from various websites, wikis, free papers/white papers and blogs; and knowledge-sharing and professional networks, which draw on varied sources and on experienced and influential people, and can make a significant contribution to learning.
  • Focus
    Though all students are studying the same subject, chances are that individual focus areas could be very different, leading to limited participation on some topics and extensive participation on others (Murphy and Jerome, 2005).
  • Cultural differences
    In an online education environment, it is quite likely that students come from varied socio-cultural backgrounds. Not all students would be comfortable with open debate. Some would hesitate to express themselves for fear of saying something inappropriate, while others may be more vocal and claim leadership status in conversations.

Reliability of assessment
The reliability of such an assessment is affected by a variety of factors:

  • Assessment of a student may be influenced not just by his/her own participation but by how others respond to his/her comments; assessing an individual student's effort in a group activity is difficult.
  • The sequence in which students respond to a discussion topic is essentially random, and the first one to post a response to a new topic is not necessarily the most knowledgeable about it. Further, a student who takes time to reflect and give a more considered response may no longer have the opportunity to contribute meaningfully.
  • Individual students representing a minority (in terms of race/ nationality/ language/ age) may either inadvertently or consciously get excluded from conversations
  • Determining originality of comments made in discussion groups is not easy and unlike in an essay, references may not always be provided
  • Inter-assessor consistency is also an issue to be considered, and moderation requires additional resources.

Suggested action

Now, let’s identify conditions under which formal assessment of participation in online discussion groups is relevant.

Factors favoring assessment
Higher engagement
Research (Palmer et al., 2008) indicates that the extent of participation in online discussion posts not only correlates with but also impacts success in summative assessments, because of the higher engagement with the course materials. Ramos and Yudko (2006) differ on this, demonstrating that it is hits (the frequency with which content pages on the class site were viewed), and not posts, that is a good predictor. What is clear is that the extent of engagement with the online environment has a positive impact on summative assessments, and discussion groups are a good way of fostering that engagement. For discussion boards to be successful, the topics need to be controversial and thought-provoking, promoting higher-level thinking and active discussion (Kay, 2006).
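
As an illustration of how an instructor might check this for their own cohort, here is a minimal sketch in Python comparing how well posts and page hits track final results. The file name and column names (a hypothetical participation.csv with posts, hits and final_score columns) are assumptions; the Pearson correlation itself is the standard formula.

    import csv
    import math

    def pearson(xs, ys):
        # Standard Pearson correlation between two equal-length lists of numbers.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical export: one row per student with their number of discussion
    # posts, number of content-page hits and final summative score.
    posts, hits, scores = [], [], []
    with open("participation.csv", newline="") as f:
        for row in csv.DictReader(f):
            posts.append(float(row["posts"]))
            hits.append(float(row["hits"]))
            scores.append(float(row["final_score"]))

    print("posts vs final score:", round(pearson(posts, scores), 2))
    print("hits vs final score: ", round(pearson(hits, scores), 2))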

Triangulation
Incorporating participation in online discussion groups as a component of assessment, in addition to written tests and practical or oral examinations, would allow for triangulation.
Using a combination of different assessment methods gives a better picture of students' progress and learning than a single end-of-course assessment.
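
For illustration, here is a minimal sketch of how component scores from several assessment methods might be combined. The components and weights are hypothetical, with discussion participation kept at the modest weighting (no more than 10%) suggested later in this post.

    # Hypothetical assessment components and weights (must sum to 1.0).
    # Discussion participation is deliberately capped at 10% so the stakes stay
    # low enough for collaboration not to be crowded out by competition.
    WEIGHTS = {
        "written_exam": 0.50,
        "assignment": 0.30,
        "oral_presentation": 0.10,
        "discussion_participation": 0.10,
    }

    def final_grade(component_scores):
        # Weighted average of component scores, each expressed out of 100.
        assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
        return sum(WEIGHTS[name] * component_scores[name] for name in WEIGHTS)

    # Example student: strong written exam, moderate discussion participation.
    print(round(final_grade({
        "written_exam": 72,
        "assignment": 80,
        "oral_presentation": 65,
        "discussion_participation": 50,
    }), 2))  # -> 71.5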

Planning for Assessment
Formative assessment of participation in online discussion groups is recommended, as it is an essential element of the teaching-learning process: it gives feedback to students and motivates them. It is even more relevant in distance learning, where isolation can have a negative impact. The focus of assessment must be on ensuring high consequential validity, that is, a positive backwash effect on learning (Boud, 1995).

The challenge is in getting students to engage in meaningful discussion and knowledge building without the fear of being assessed. Some suggestions are listed here:

  • Better scaffolding/ prompts
  • Better navigation of discussion posts
  • Search within discussion posts
  • Borrowing from the success of social networks: email or message alerts for new questions, and for responses to questions posted by a particular student or on a particular topic; and requiring students to update a “status” at least once every week, for example “Reading paper by Gulikers et al. on authentic assessment” or “Starting my assignment 1 on assessing participation…”. This would be the equivalent of attendance/presence in a classroom (a sketch of such an alert mechanism follows this list).
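
A minimal sketch of the kind of alert mechanism described above, in Python. The data model and the notify function are assumptions for illustration; a real forum platform would hook this into its own email or messaging service.

    from dataclasses import dataclass, field

    @dataclass
    class Subscription:
        student: str
        topics: set = field(default_factory=set)   # topics the student follows
        authors: set = field(default_factory=set)  # peers whose posts they follow

    def notify(student, message):
        # Placeholder: a real system would send an email or in-forum message here.
        print(f"Alert for {student}: {message}")

    def on_new_post(post, subscriptions):
        # Alert every subscriber who follows this post's topic or its author.
        for sub in subscriptions:
            if sub.student == post["author"]:
                continue  # do not alert the author about their own post
            if post["topic"] in sub.topics or post["author"] in sub.authors:
                notify(sub.student, f'New post on "{post["topic"]}" by {post["author"]}')

    subs = [Subscription("Priya", topics={"authentic assessment"}),
            Subscription("Tom", authors={"Priya"})]
    on_new_post({"topic": "authentic assessment", "author": "Anna"}, subs)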

To ensure the validity of a summative assessment, the first step is to determine how critical the discussion is to the achievement of the learning objectives. Assessment of participation in online discussions should be considered only if students' success in their future professional or educational lives is believed to be impacted by their:

  • Ability to collaborate and work with others (especially people from different countries, cultures or disciplines, and in a virtual environment). With increasing globalization and dependence on web-based communication (emails, instant messengers and virtual meetings), there is merit in assessing this in certain situations.
  • Ability to think critically and to reflect upon their own learning and on that shared by others

It should be undertaken only when:

  • The instructional design is in alignment with such an assessment
  • It does not involve stakes so high that the motivation to compete exceeds the motivation to collaborate (e.g. the weighting should not exceed 10%)
  • Valid and reliable criteria for assessment can be established and students are made aware of what is being assessed

Althaus (1996) suggests another approach that is worth considering:
On components of assessment that evaluate critical thinking, problem solving or knowledge construction, students could be given a choice between participating in an online discussion forum and writing a paper. However, making assessments comparable between such groups of students can be difficult. To be fair, the assessment rubric should primarily look for evidence of critical thinking, quality of response (knowledge, comprehension, application and analysis), and claims made and evidence provided, rather than the extent of participation in terms of the number, frequency or length of posts.
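
A minimal sketch of a rubric along these lines, with hypothetical criteria and weights. It scores the quality of contributions against the dimensions named above, and nothing in it rewards the number, frequency or length of posts.

    # Hypothetical rubric: each criterion is rated 0-4 by the assessor and weighted.
    RUBRIC_WEIGHTS = {
        "critical_thinking": 0.35,            # evidence of analysis and reasoned argument
        "knowledge_and_comprehension": 0.25,
        "application_and_analysis": 0.20,     # applying ideas to new contexts
        "claims_and_evidence": 0.20,          # claims backed by evidence or references
    }

    def rubric_score(ratings, max_rating=4):
        # Convert per-criterion ratings (0..max_rating) into a percentage.
        weighted = sum(RUBRIC_WEIGHTS[c] * ratings[c] for c in RUBRIC_WEIGHTS)
        return 100 * weighted / max_rating

    print(round(rubric_score({
        "critical_thinking": 3,
        "knowledge_and_comprehension": 4,
        "application_and_analysis": 2,
        "claims_and_evidence": 3,
    }), 2))  # -> 76.25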

While it is important that posts are concise, a limit on the maximum number of words as an assessment criterion can be constraining. Hence, this should not be included as a criterion; instead, students should be provided with guidelines on the length of a post.

Ensuring adequacy of resources
The final but equally important check before deciding to assess participation in online discussion groups is the availability of adequate resources, including for moderation. It is a serious decision because the task is time intensive and demands immediacy. There is little value in a response that comes a week later: by then, the student who posted the question or response has possibly moved on to some other topic, or others have responded and taken the conversation to a different level. The teacher not only has to assess student participation but also ensure timely scaffolding, providing triggers that stimulate discussion and debate, moderating, and bringing open issues to closure.

References

Althaus, S. (1996, Aug 29 - Sep 1). Computer-Mediated Communication in the University Classroom: An experiment with online discussions. Paper presented at the Annual Meeting: American Political Science Association, San Francisco.

Boud, D. (1995). Assessment and learning: contradictory or complementary? In P. Knight (Ed.), Assessment for Learning in Higher Education (pp. 35-48): Kogan Page.

Craven-III, J. A., & Hogan, T. (2001). Assessing student participation in the classroom. Science Scope.

Edelstein, S., & Edwards, J. (2002). If You Build It, They Will Come: Building Learning Communities Through Threaded Discussions. Online Journal of Distance Learning Administration, V(I).

Gulikers, J. T. M., Bastiaens, T. J., & Kirschner, P. A. (2004, June 23-25). Perceptions of authentic assessment- Five dimensions of authenticity. Paper presented at the Second biannual joint Northumbria/EARLI SIG assessment, Bergen.

Han, S. Y., & Hill, J. R. (2007). Collaborate to learn, learn to collaborate: Examining the roles of context, community and cognition in an asynchronous discussion. Journal of Educational Computing Research, 36(1), 89-123.

Kay, R. H. (2006). Developing a comprehensive metric for assessing discussion board effectiveness. British Journal of Educational Technology, 37(5), 761–783.

McKey, P. (1999, 5-8 December). The total student experience. Paper presented at the ASCILITE 99, Australia.

Meyer, K. A. (2004). Evaluating online discussions: Four different frames of analysis. Journal of Asynchronous Learning Networks, 8(2), 101-114.

Murphy, E., & Jerome, T. (2005). Assessing students’ contributions to online asynchronous discussions in university-level courses. E-Journal of Instructional Science and Technology (e-JIST), 8(1).

Palmer, S., Holt, D., & Bray, S. (2008). Does the discussion help? The impact of a formally assessed online discussion on final student results. British Journal of Educational Technology, 39(5), 847-858.

Ramos, C., & Yudko, E. (2006). “Hits” (not “Discussion Posts”) predict student success in online courses: A double cross-validation study. Computers & Education, 50(4), 1174–1182.

Roblyer, M. D., & Ekhaml, L. (2000, June 7-9). How interactive are YOUR distance Courses? A rubric for assessing interaction in distance learning. Paper presented at the DLA 2000, Callaway, Georgia.
