Saturday, July 4, 2009

Authentic Assessment of Marketing Projects

Background
For the assessment tool I have outlined, I have considered students in India undertaking a Post Graduate Diploma in Management, a two-year, full-time course in which each year comprises three terms. The course is recognized by the AICTE (All India Council for Technical Education).
The key motivation for students in undertaking this course is to acquire a diploma in business, preparing themselves for entrepreneurship or employment opportunities in Indian and multinational companies.

Assessment instrument
Rather than a single project for the entire term, two or three group projects of increasing complexity are more appropriate for project-based learning and assessment.

Group Project 1
Each group of students identifies a brand it will represent within a given product category and builds a marketing strategy for it. Deliverables include a written report and a presentation from the group, an individual critical reflection and a peer assessment. (Weight: 40%)

Group Project 2
Each group of students selects a marketing problem from the options provided. Deliverables include a written report and a presentation from the group, an individual critical reflection and a peer assessment. (Weight: 60%)
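Under these weights, the overall project mark reduces to a simple weighted sum. A minimal sketch of that arithmetic (the function name and the 0-100 marking scale are illustrative assumptions, not part of the brief):

```python
def overall_project_mark(project1_mark, project2_mark):
    """Combine the two group-project marks (each assumed on a 0-100 scale)
    using the weights stated above: 40% for Project 1, 60% for Project 2."""
    return round(0.4 * project1_mark + 0.6 * project2_mark, 1)

# e.g. 70 on Project 1 and 80 on Project 2 gives an overall mark of 76.0
```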

Instructions for Students

Each student will submit two projects as a member of a randomly assigned group.

1. Each project will include
  • Submission of a written report not exceeding 3,000 words, excluding appendices such as tables or charts.
  • A 25-minute presentation to peers and the examiner. The presentation should run to about 20-25 slides to ensure completion in the time allotted.
  • The report and the presentation must clearly identify, on the cover page, the students who contributed to the project.
  • The presentation will be followed by a Q&A session.
Submissions are to be made using a web-based system or email.

2. After the completion of the project and presentation, each student must submit a one-page critical reflection that must:
  • Relate theoretical concepts covered in the classroom to applications on the project
  • Interpret the experiences associated with working in the group: how challenges were met and problems were resolved
  • Explain how any of their assumptions have changed
3. Students must also submit a self and peer assessment using a rubric made available to them, identifying their own contribution to the project and that of their team members.

4. As the group projects carry substantial weight, success is a function of teamwork and cooperation.

5. Timeline: An extension of not more than two days will be allowed for groups to submit the project reports.

6. Access to software: Students are free to use software of their choice for spreadsheets, word processing and presentation so long as the files are compatible with Microsoft Office. If software is required for data analysis, students may access SPSS software installed on the computers in the college laboratory.

7. Internet access from college premises is permitted on all days including Sundays and public holidays from 7:00 am to 9:00 pm.

8. Students are not expected to engage any external resources to conduct fieldwork or to provide incentives to participants in any research. If a project demands telephone interviews or visits to business locations or retail stores, students are expected to conduct these personally and at their own expense.

9. Students are free to raise questions pertaining to their projects in the classroom or via a discussion forum, so that the process remains transparent and commonly asked questions can be answered efficiently.

10. Plagiarism of any kind will be dealt with strictly. Disciplinary action may be taken at any time, even after the results have been reported or the diploma has been awarded.

Instructions to the examiner
  • Projects must be assigned to groups of four or five students each.
  • Projects must be authentic, reflecting issues that practitioners of marketing face in the current environment.
  • The projects must commence after the key marketing concepts are discussed in the classroom.
  • Each member of the team must be involved in either making the presentation or fielding questions from the examiner. The examiner will randomly assign two group members to deliver the presentation, while the rest respond to questions at the end of it.
  • All members of the group will receive the same number of marks for the presentation and the report. However, the overall grading will be influenced by peer assessment and critical reflection.
  • Other students attending the presentation may ask questions but these have to be routed through and moderated by the examiner who must ensure that questions asked are appropriate and relevant.
  • Examiner must look for consistency in the peer assessments done by group members. Any doubts on individual contribution should be clarified prior to assessment.
  • Samples of projects and critical reflection are to be made available to the students.
  • The assessment rubrics are to be shared with the students at the beginning of the term.
  • If a report is suspected not to be original work, or if references to content reproduced from elsewhere are missing, it must be checked for plagiarism. Marks may be deducted if a match of 25-40% is identified; if the match exceeds 40%, the group receives no marks for the report and the incident is recorded in the student records.
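The plagiarism rule above is effectively a two-threshold decision. A minimal sketch of that logic (the function name and action labels are illustrative, assuming the similarity checker reports a percentage match):

```python
def plagiarism_action(match_pct):
    """Map a similarity-check percentage to the outcome described above.

    The 25% and 40% thresholds come from the stated policy;
    the returned labels are illustrative, not an official system.
    """
    if match_pct > 40:
        return "no marks for the report; incident recorded in student records"
    if match_pct >= 25:
        return "marks deducted"
    return "no action"
```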
Assessment Feedback
Students must be given feedback on
  • Any theoretical concepts or analytical techniques not taken into account
  • Depth of research conducted and the needs identified to understand the problem
  • Feasibility of implementation of suggested solutions
  • Improvements that can be made to the report writing, presentation skills and managing the Q&A sessions
  • Collaborative efforts of the group

Project Assessment Rubric

Educational justification for authentic assessment
There is little focus on formative assessment because of a lack of time and continuity: the various subjects are taught by specialists whose contact with students may not extend beyond a trimester.

To be adequate, comprehensive and authentic, assessment should consist of tasks that are representative of the ‘knowledge, skills and strategies needed for the activity or domain being tested’ (Fredericksen & Collins, 1989, cited in Pitman, 1999). The holistic, competency-based assessment is designed with the learning objectives and expected learning outcomes in mind.

Learning Objectives
The focus is on acquiring knowledge about the different components of marketing such as product offering and life cycle, communication, pricing and distribution in the context of current business environment in India with emphasis on case studies and projects.

Expected learning outcomes
On completion of this course, students will have an understanding of the marketing management process. Students should be able to construct a marketing plan and evaluate plans created by others, ascertain the reliability of data sources used for consumer understanding and market intelligence, justify the suggested plan, and define criteria for measuring the success of the proposed marketing strategy.


Ensuring authenticity
As we are dealing with students who have already completed 3 or 4 years of University level education and possibly have some work experience, the focus is on providing learning outcomes that are relevant to employability and entrepreneurship. Brogan (2006) highlights how project based learning in groups is appropriate from the employability perspective.

To ensure a high degree of authenticity, project-based learning (PBL) and assessment is recommended for a substantial share of the assessment weight, as it:

  • Allows students to create a product and demonstrate learning
  • Reflects real work situations, where students are expected both to collaborate and to demonstrate individual competence, and to present their work to others to seek support or approval
  • Is relevant in an adult learning context
  • Focuses on improving problem-solving capabilities

Authentic assessment has a positive impact on student motivation to learn and students are able to see its relevance to their future professional lives. (Gulikers, Bastiaens & Kirschner, 2004).

Most teaching staff at such institutions come from the marketing industry and are well equipped to construct and scaffold real-life projects, and to offer real-life examples in classroom discussions: how marketing plans were created and implemented, what hurdles arose in terms of information and resources, how those were resolved, and what the end results were. Effectively, projects become central to both learning and assessment, and students are able to connect theory with practice. On some occasions there are even opportunities to implement the projects in real life, at least in part if not in their entirety.

Group formation is also randomized, as professional life does not always offer opportunities to work with like-minded people. This suggestion comes from professionals who have experienced how an inability to work with people of different attitudes and values can have a negative impact on the success of a project.

The two projects are sequenced such that the first one is well-structured and is limited to one product category. While this limits the choice, students will be able to appreciate how they view the brand they represent and how their competition views them. The second project is somewhat ill-structured and also allows some flexibility in terms of choice so that students can choose a project that fits best with their interests or employment objectives.

Grading with rubrics within a marketing course helps students clearly identify the different aspects of project work and what is expected of them (Amantea, 2004).

Critical reflection

According to Catterall, Maclaran, and Stevens (2002), in group projects, students need to reflect upon the impact of interpersonal factors, differing agendas of team members, compromises, leadership battles, hurdles encountered and resolved. They also need to identify themselves in a wider economic, political, cultural and social environment.

The rationale for including critical reflection in the project work is four-fold:
  • Transferring learning from the classroom to the real world
  • Bringing students to the realization that decision making and events occur in a socio-political context and that individual perspectives can be very different
  • Making students aware of why they perceive, think, feel or act in the way they do
  • Having students assess their own contribution to the group projects

Validity

Face validity
Project-based learning and assessment has face validity because the objective is to assess students on their ability to apply concepts covered in the classroom to real-life marketing problems. Further, in their professional lives, management students will be expected to collaborate and work with others. The assessment is structured using a clearly defined project brief, timeline and assessment rubric, while still giving students some autonomy.

Content validity
The projects are defined by experienced marketing professionals, ensuring that the content is relevant and current. They cover the expected learning outcomes as students will be tested on their ability to create and evaluate marketing strategy and their ability to communicate the same via a report and presentation, as they would, in a working environment.

Construct validity
In authentic assessment, construct validity is very important: the assessment task must assess problems encountered in the real world, using criteria that would also be encountered in real life (Gulikers, Bastiaens & Kirschner, 2004).

The rubric for project assessment helps assess students not only on the final product but also on the process: the research needs identified, the data gathered, and the understanding of the consumer and the competitive environment demonstrated.

Predictive validity
As the projects reflect real business problems, the assessment is a reasonable indicator of future performance in a working environment (Moskal and Leydens, 2000).

"Students taught with a more progressive, open, project-based model developed more flexible and useful forms of knowledge and were able to use this knowledge in a range of settings." (Boaler, 1998a) cited in Thomas (2000)

The project rubric also covers graduate attributes such as presentation skills and managing questions.

Reliability
The projects are assessed using a rubric that takes into account the report, presentation and the critical reflection. Under each of these areas are specific criteria clearly defined to make assessment as reliable as possible and minimize subjectivity. While subjectivity cannot be fully eliminated, moderation is recommended for the project reports to ensure inter-examiner consistency.

The rubric may also be pre-tested for consistency by giving three to five examiners the same project reports and rubric before the actual assessment commences.

The student/ group performance across the two projects must be compared for consistency and to see if differences can be explained. In an ideal situation, we should expect an improvement in performance from the first to the second project.

Fairness
Adequate choice is given for the second project so that groups can identify a context that is of interest to them. The rubric is shared with the students at the beginning of the term.

Allowing students to make use of open source/ free software and access to resources such as internet and the library ensures fairness.

By assigning a minimum score for critical reflection and peer assessment, we can ensure that a student who has contributed more to the project, or who better demonstrates learning and the application of theory to practice, fares better on the assessment. It is important to allow for the individual's accountability to the group (Newhouse-Maiden and de Jong, 2004).
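One way a minimum peer-assessment score could feed into an individual's mark is sketched below; the scaling rule, the threshold and the 0-5 rubric scale are hypothetical illustrations, not prescribed by the brief:

```python
def individual_mark(group_mark, peer_score, min_peer_score=2.0):
    """Adjust a shared group mark by a student's average peer-assessment score.

    group_mark: mark (0-100) awarded to the whole group for report and presentation.
    peer_score: average rubric score (0-5) the student received from teammates.
    At or above min_peer_score the group mark stands; below it, the mark is
    reduced proportionally. All numbers here are illustrative.
    """
    if peer_score >= min_peer_score:
        return group_mark
    return round(group_mark * peer_score / min_peer_score, 1)
```

For example, a student rated 1.0 by peers on a group mark of 80 would receive 40.0, while any rating of 2.0 or above leaves the group mark unchanged.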

The projects are checked for plagiarism, minimizing instances of cheating.

Reporting and Feedback

Meaningful feedback to students at the end of every assessment is vital, as the goal is not just completion of the project but ensuring that relevant learning happens along the way. The project rubric allows specific issues or subject areas to be identified for feedback, e.g. “The student demonstrates a deep understanding of the competitive environment and can recommend an appropriate marketing strategy. He/she needs to improve on managing questions from the audience and must focus on improving understanding of pricing strategy.”

The weight of the different assessment components increases gradually, providing the learner an opportunity to reflect and improve their performance.

The assessment will also be meaningful to potential employers who may choose to review the projects.

Sample Project briefs

Marketing Strategy Project Brief: Project 1 (40%)
Choose a brand in the detergent category from the list below
  • Ariel
  • Tide
  • Surf Excel
  • Nirma
  • Henko
  • Mr. White
  • Bang
  • Vanish
  • Wheel
  • Rin

No two groups can choose the same brand.
  • Demonstrate an understanding of the brand and its competitive environment
  • Identify the key issues facing the brand today and the stage it has reached in the product life cycle
  • Develop a marketing strategy for 2010, highlighting any changes you would make to the marketing mix (product offering, distribution, pricing, and advertising & promotion)
The group must submit the project report within 15 working days, followed by a presentation in the subsequent week. After the completion of the project and presentation, each student must submit a one-page critical reflection that covers:

  • A brief description of the key tasks he/she performed on the project
  • How theoretical concepts covered in the classroom/study material were applied on the project
  • An interpretation of the experiences associated with working in the group: how challenges were met and problems were resolved
  • Personal goals for improvement, if any

Sample Project Brief: Project 2 (60%)
Select one of the options below.

Option 1
Identify a product or a service category that currently does not exist in India. Develop a strategy to launch it in India.
  • Describe the product or service category
  • Demonstrate the relevance of the product or the service to the Indian market
  • Identify and describe the target segment
  • Strategy for launch of the product including advertising and promotion
  • Sales strategy
  • Forecast of sales/ subscriptions for the next 3 years and provide rationale for the forecast
  • Identify metrics that will determine a successful launch

Option 2
You are employed with a leading telecom operator and seek to increase revenue by offering value added services targeted to rural India.
  • Describe the service offering
  • Describe the target segment
  • Demonstrate the relevance of the product or the service to the rural Indian market
  • Strategy for launch of the product including advertising and promotion
  • Strategy to build the subscriber base
  • Forecast of subscriptions for the next 3 years and provide rationale for the forecast
  • Identify metrics that will determine a successful launch

Option 3
You are responsible for marketing a premium airline brand. With the economic downturn, your company decides to launch a budget airline with fares that are 40% lower than those of the existing brand. You are now responsible for marketing both brands but must ensure that cannibalization of the premium airline brand is kept to a minimum. How would you differentiate between the two in terms of:
  • Target segments
  • Service offerings
  • Pricing
  • Advertising and promotion strategy
  • Identify metrics that will determine the success of the marketing plan.

The group must submit the project report within 20 working days, followed by a presentation in the subsequent week. After the completion of the project and presentation, each student must submit a one-page critical reflection that covers:
  • A brief description of the key tasks he/she performed on the project
  • How theoretical concepts covered in the classroom/study material were applied on the project
  • An interpretation of the experiences associated with working in the group: how challenges were met and problems were resolved
  • Personal goals for improvement, if any

References

Amantea, C. A. (2004). Using Rubrics To Create And Evaluate Student Projects In A Marketing Course. Journal of College Teaching & Learning, 1(4), 23-28.

Brogan, M. (2006). "What you do first is get them into groups": Project-based learning and the teaching of employability skills. Fine Print, 29(2), 11-16.

Catterall, M., Maclaran, P., & Stevens, L. (2002). Critical reflection in the marketing curriculum. Journal of Marketing Education, 24(3), 184-192.

Forehand, M. (2005). Bloom's taxonomy: Original and revised. In M. Orey (Ed.), Emerging perspectives on learning, teaching and technology. Retrieved 27 May 2009, from http://projects.coe.uga.edu/epltt/

Gulikers, J. T. M., Bastiaens, T. J., & Kirschner, P. A. (2004, June 23-25). Perceptions of authentic assessment- Five dimensions of authenticity. Paper presented at the Second biannual joint Northumbria/EARLI SIG assessment, Bergen.

Kotler, P. (1999). Marketing Management: The Millennium Edition (10th ed.). Prentice Hall of India Private Limited.

Newhouse-Maiden, L., & de Jong, T. (2004). Assessment for learning: Some insights from collaboratively constructing a rubric with post graduate education students. Paper presented at the 13th Annual Teaching Learning Forum, 9-10 February 2004, Perth: Murdoch University.

Pitman, J. A., O’Brien, J. E., & McCollow, J. E. (1999, May). High-Quality Assessment: We are what we believe and do. Paper presented at the IAEA Conference, Bled, Slovenia.

Thomas, J. W. (2000). A review of research on Project-Based Learning. Retrieved May 10, 2009, from http://www.bie.org/index.php/site/RE/pbl_research/29

Thomas, R. E. (1997). Problem-based learning: measurable outcomes. Medical Education, 31, 320-329.

Williams, J. B., & Wong, A. (2009). The efficacy of final examinations: A comparative study of closed-book, invigilated exams and open-book, open-web exams. British Journal of Educational Technology, 40(2), 227–236.

Wolf, P. (n.d.). Transformational learning and critical reflection. Retrieved 17 May, 2009, from http://www.wlu.ca/documents/20396/Critical_reflection_-_handout.pdf

Friday, July 3, 2009

Assessing participation in online discussion groups

Background
How does the formal online education environment differ from a traditional classroom environment? Should participation in online discussion groups always be assessed, as a rule? Or should it be assessed only in certain situations? And if participation in online discussion forums is assessed, what should we measure? This paper seeks to address these questions.

The comments here are in the context of education that is delivered entirely online and dealing with students with over 12 years of education. This paper does not touch upon the issue of identity which is inherent in any exclusively online learning environment.

Comparing the classroom and the online learning environment

The teacher, being physically present in the classroom, is in a position to know whether students are following what is being discussed and to assess the participation of individual students. While classroom participation is sometimes assessed, it is not mandatory and is usually not part of a summative, end-of-year assessment. Formative assessment of classroom participation can provide valuable feedback to the student, leading to a positive impact on the end-of-year assessment.

The formal online learning environment is different. The teacher and participating students have little or no face-to-face contact. Online discussion groups are one way of gauging the extent of student understanding (without having to wait until the assignments are submitted), providing scaffolding and progressively guiding students towards the expected learning outcomes. Given the commitment to flexibility of study (and, to some extent, assessment) schedules that online learning models offer, and the geographical, cultural and time differences involved, how do you encourage participation in online discussion groups (especially in an asynchronous discussion)? Assigning a certain proportion of marks for participation seems to be a popular strategy (Palmer, Holt and Bray, 2008). However, is it relevant for the end-of-course, summative assessment? Is it authentic, reflecting real-life conditions that have meaning for the learner? Is such an assessment valid and reliable?

Relevance of Online discussion groups

The interaction between student peers and of students with the teachers is vital to learning. At a primary level, the online discussion groups serve as a means for teachers to determine student progress in order to make any formative assessment and help them know when to move on to the next topic.

More importantly, research indicates that asynchronous discussions in an online learning environment contribute to learning. Further, learning that occurs in such a community is situated and the learning process and outcomes are shared across the learners (Han and Hill, 2007).

It is not the online delivery of content that makes the difference; it is the scaffolding and the overall student experience (McKey, 2000). A testament to this is the fact that MIT (Massachusetts Institute of Technology) offers most of its course content online, free of charge. In the absence of a classroom, the online discussion area becomes a central place for the testing of ideas, the expression of views and debate. It provides students an opportunity to reflect on what they have learned and to express themselves.

“Active learning is linked to students’ ability to apply knowledge to new contexts” (Bransford, Brown, and Cocking, 2000) cited in Craven and Hogan (2001) (p.37).

What makes online discussion groups valuable is the sharing of information and different perspectives, leading to the co-construction of contextual knowledge. They also provide a permanent record of each student's contribution to the collaborative process.

Authenticity, Reliability and Validity of assessment

First, let’s identify factors that do not favor formal assessment of participation in online discussion groups.

Validity of assessment
For the assessment of subject knowledge, participation in discussion forums demands a different set of skills compared with traditional and accepted formats such as cloze tests, short answers, essays or even oral examinations. Students not familiar or comfortable with such a format will clearly be at a disadvantage if they are assessed for subject knowledge based on their participation in online discussion groups. Further, the assessor does not have complete control over the flow and scope of the discussion, affecting face validity.

Impact of assessment on student behavior in the discussion groups

Enforcing assessment does ensure participation, but research indicates that it has some unfavorable impact on participation. Palmer et al. (2008) highlight how assessment of participation in online discussion groups becomes an imposed exercise in which students are most likely to participate only to meet the assessment criteria. Not being the first to post a comment or raise a pertinent question can affect student motivation levels. “Other attempts to integrate computer mediated discussion with traditional classroom or working environments have shown that when participation is required, many users invariably respond with anxiety and resistance” (Komsky 1991; Quinn et al. 1983) cited in Althaus (1996) (p.17).

Authenticity of assessment


  • Student & Teacher perceptions
    Interestingly, Gulikers, Bastiaens and Kirschner (2004) report that for authentic assessment, the social context is perceived as the least important by both teachers and students.

  • Learning styles
    Learners differ in terms of strategies they adopt. While some may prefer a collaborative approach, working with peers, others may choose to study independently.

  • Avenues for learning
    Online discussion groups are not the only avenue for learning; in today's environment various other avenues are available: course materials; textbooks, journals and audio-visual materials; the internet, where search engines have made it very easy to access information from various websites, wikis, free papers/white papers and blogs; and knowledge-sharing and professional networks, which, drawing on varied sources and experienced, influential people, can make a significant contribution to learning.
  • Focus
    Though all students are studying the same subject, chances are that individual focus areas could be very different, leading to limited participation on some topics and extensive participation on others (Murphy and Jerome, 2005).
  • Cultural differences
    In an online education environment, it is quite likely that students come from varied socio-cultural backgrounds. Not all students would be comfortable with open debate. Some would be afraid to express themselves for fear of saying something inappropriate, while others may be more vocal and claim leadership status in conversations.

Reliability of assessment
The reliability of such an assessment is impacted by a variety of factors:

  • Assessment of a student may be influenced not just by his/her own participation but by how others respond to his/her comments. Assessing an individual student's effort in a group activity is difficult.
  • The sequence in which students respond to a discussion topic is essentially random, and the first to post a response to a new topic is not necessarily the most knowledgeable about it. Further, a student who takes time to reflect and give a more considered response may not have the opportunity to express himself appropriately.
  • Individual students representing a minority (in terms of race/ nationality/ language/ age) may either inadvertently or consciously get excluded from conversations
  • Determining originality of comments made in discussion groups is not easy and unlike in an essay, references may not always be provided
  • Inter-assessor consistency is also an issue to be considered and moderation means more resources.

Suggested action

Now, let’s identify conditions under which formal assessment of participation in online discussion groups is relevant.

Factors favoring assessment
Higher engagement
Research (Palmer et al., 2008) indicates that the extent of participation in online discussion posts not only correlates with, but also impacts, success in summative assessments because of the higher engagement with the course materials. Ramos and Yudko (2006) differ on this, demonstrating that it is hits (the frequency with which content pages on the class site were viewed), not posts, that are a good predictor. What is clear is that the extent of engagement through the online environment has a positive impact on summative assessments, and discussion groups are a good way of managing that. For discussion boards to be successful, the topics need to be controversial and thought-provoking, promoting higher-level thinking and active discussion (Kay, 2006).

Triangulation
Incorporating participation on online discussion groups as a component of assessment in addition to written tests, practical or oral examinations would allow for triangulation.
Using a combination of different assessment methods gives a better picture of students’ progress/ learning rather than a single end-of-the course assessment.

Planning for Assessment
Formative assessment of participation in online discussion groups is recommended as it is an essential element in the teaching – learning process: giving feedback to students and motivating them. It is more relevant in distance learning where isolation can have negative impact. The focus of assessment must be to ensure high consequential validity- a positive backwash effect on learning (Boud, 1995).

The challenge is in getting students to engage in meaningful discussion and knowledge building without the fear of being assessed. Some suggestions are listed here:

  • Better scaffolding/ prompts
  • Better navigation of discussion posts
  • Search within discussion posts
  • Borrowing from the success of social networks: email or message alerts for new questions and for responses to questions posted by a particular student or on a particular topic, and requiring students to update their “status” at least once every week. For example: “Reading paper by Gulikers et al. on authentic assessment” or “Starting my assignment 1 on assessing participation…” This would be the equivalent of attendance/presence in a classroom.

To ensure validity of a summative assessment, the first step is to determine how critical the discussion is to the achievement of the learning objectives. Assessment of participation in online discussions should be considered only if student success in future professional or educational lives is believed to be impacted by their

  • Ability to collaborate and work with others (especially people from different countries/ cultures/ disciplines and in a virtual environment). With increasing globalization and dependence on web-based communication (emails, instant messengers and virtual meetings) there is merit in assessing it in certain situations.
  • Ability to think critically and reflect upon own learning and that shared by others

It should be undertaken only when:

  • The instructional design is in alignment with such an assessment
  • It does not involve high stakes, where the motivation to compete is higher than the motivation to collaborate (e.g. the weight should not exceed 10%)
  • Valid and reliable criteria for assessment can be established and students are made aware of what is being assessed

Althaus (1996) suggests another approach worth considering:
On components of assessment that evaluate critical thinking, problem solving or knowledge construction, students could be given a choice between participating in an online discussion forum and writing a paper. However, making assessments comparable across such groups of students can be difficult. To be fair, the assessment rubric should look primarily for evidence of critical thinking, quality of response (knowledge, comprehension, application and analysis), and claims made and evidence provided, rather than the extent of participation in terms of the number, frequency or length of posts.
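Such a rubric weights quality criteria and deliberately ignores participation volume. The sketch below is only an illustration of that principle; the specific criteria names, weights and the 0-4 rating scale are assumptions, not Althaus's own scheme:

```python
# Illustrative rubric: quality criteria carry integer weights summing to 100;
# number, frequency and length of posts are deliberately absent.
RUBRIC = {
    "critical_thinking": 35,
    "quality_of_response": 35,   # knowledge, comprehension, application, analysis
    "claims_and_evidence": 30,
}

def score_contribution(ratings):
    """Convert 0-4 ratings per criterion into a weighted mark out of 100.

    ratings: dict mapping each rubric criterion to a rating from 0 to 4.
    """
    total = sum(RUBRIC[criterion] * ratings[criterion] for criterion in RUBRIC)
    return total / 4  # maximum total is 400, so this yields 0-100

print(score_contribution({
    "critical_thinking": 3,
    "quality_of_response": 4,
    "claims_and_evidence": 2,
}))  # → 76.25
```

Publishing the weights alongside the criteria also satisfies the earlier condition that students be made aware of exactly what is being assessed.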

While it is important that posts are concise, a maximum word limit as an assessment criterion can be constraining. Hence, it should not be included as a criterion; instead, students should be given guidelines on the length of a post.

Ensuring adequacy of resources
The final but equally important check before deciding to assess participation in online discussion groups is the availability of adequate resources, including time for moderation. This is a serious commitment because moderation is time intensive and demands immediacy: there is little value in a response that comes a week later, as by then the student who posted the question has probably moved on to another topic, or others have responded and taken the conversation to a different level. The teacher not only has to assess student participation but also provide timely scaffolding: posting triggers that stimulate discussion and debate, moderating, and bringing open issues to closure.

References

Althaus, S. (1996, August 29 - September 1). Computer-mediated communication in the university classroom: An experiment with online discussions. Paper presented at the Annual Meeting of the American Political Science Association, San Francisco.

Boud, D. (1995). Assessment and learning: Contradictory or complementary? In P. Knight (Ed.), Assessment for Learning in Higher Education (pp. 35-48). London: Kogan Page.

Craven, J. A., III, & Hogan, T. (2001). Assessing student participation in the classroom. Science Scope.

Edelstein, S., & Edwards, J. (2002). If You Build It, They Will Come: Building Learning Communities Through Threaded Discussions. Online Journal of Distance Learning Administration, V(I).

Gulikers, J. T. M., Bastiaens, T. J., & Kirschner, P. A. (2004, June 23-25). Perceptions of authentic assessment- Five dimensions of authenticity. Paper presented at the Second biannual joint Northumbria/EARLI SIG assessment, Bergen.

Han, S. Y., & Hill, J. R. (2007). Collaborate to learn, learn to collaborate: Examining the roles of context, community and cognition in an asynchronous discussion. Journal of Educational Computing Research, 36(1), 89-123.

Kay, R. H. (2006). Developing a comprehensive metric for assessing discussion board effectiveness. British Journal of Educational Technology, 37(5), 761–783.

McKey, P. (1999, 5-8 December). The total student experience. Paper presented at the ASCILITE 99, Australia.

Meyer, K. A. (2004). Evaluating online discussions: Four different frames of analysis. Journal of Asynchronous Learning Networks, 8(2), 101-114.

Murphy, E., & Jerome, T. (2005). Assessing students’ contributions to online asynchronous discussions in university-level courses. E-Journal of Instructional Science and Technology (e-JIST), 8(1).

Palmer, S., Holt, D., & Bray, S. (2008). Does the discussion help? The impact of a formally assessed online discussion on final student results. British Journal of Educational Technology, 39(5), 847-858.

Ramos, C., & Yudko, E. (2006). “Hits” (not “Discussion Posts”) predict student success in online courses: A double cross-validation study. Computers & Education, 50(4), 1174–1182.

Roblyer, M. D., & Ekhaml, L. (2000, June 7-9). How interactive are YOUR distance Courses? A rubric for assessing interaction in distance learning. Paper presented at the DLA 2000, Callaway, Georgia.