Evaluation – JISC Trial

Following overwhelming interest from members of the JISC community, we recently ran a second iteration of Designing for Collaborative Learning with 42 active participants.  In line with previous feedback, a number of minor alterations were made before the start of this trial – namely regarding the clarity of the theoretical background, reduced group sizes, and the addition of optional extra tasks designed to encourage participants to practise their moderation skills.

Overall, the feedback from this second iteration has been very positive, with the main complaint (still from only two participants) being that the course is targeted more towards beginners than towards those who already have some experience in designing these types of online collaborative activities.  It was interesting to see that, despite the audience we had for this trial (e-learning professionals, senior lecturers, etc.), it was still a struggle to pace the e-tivities in the most beneficial way – i.e. by ensuring that all members of the group were contributing to each e-tivity at the same time, rather than posting their contributions all at once and considering their work to be complete.  In an attempt to combat this, we decided to release each e-tivity only once the previous one had come to a close, so that participants could not rush ahead and leave their peers behind with no opportunity for discussion.  While this approach was effective, feedback suggests that a number of participants (2/17) would have preferred to be able to access all of the e-tivities earlier, as they found themselves in a position where they could not contribute to one or more of the e-tivities at the appropriate time.  As we have found, it has been difficult to strike the right balance, but I would still advocate the forced pacing of the e-tivities, as it means that participants can feel confident that their peers are available during the same allotted time to engage in discussion.  Drawing on my own experiences of online courses, seeing that a peer has made their contribution some time before I have (i.e. before it was expected) is off-putting, and leaves the impression that they do not intend to look back at the posts of others; this goes against the discursive nature of these types of activities, and their benefits are not achieved.

The only other comments that could perhaps be taken on board in future iterations are: 1) a request for the use of blogs as part of the course, and 2) two requests for an activity which requires participants to work collaboratively towards a shared outcome – e-tivity 4 was in fact designed in this way, as participants were required to contribute to a collection of example ‘difficult’ discussion board posts and work together to write and agree on suitable responses.  Upon reflection, this aim could have been made more transparent in the structuring of the e-tivity.

In light of this feedback, a number of minor changes were made and the course packaged as an open resource available for download from the project homepage:

https://www2.le.ac.uk/departments/beyond-distance-research-alliance/projects/p2.0ple/

This work is licensed under a Creative Commons Attribution 3.0 Unported License.

 

Evaluation – UoL Trial

Well, the results of our survey have started to come in – following the close of the moderated section of the course on Friday night – and it seems that we are beginning to draw out some areas for improvement.  So far, we have been able to highlight the following points for consideration:

  • Timings given for e-tivities are perhaps too generous (could maybe reduce these/quote a range)
  • Offer a lower and upper limit for word-counts – participants would have liked to have been able to write more
  • Offer more time for participation in self-study sections
  • Improve the clarity of theoretical background across the board
  • Offer optional ‘more-advanced’ activities to encourage and cater for more experienced participants
  • One participant commented that they would have preferred to be in smaller groups for some of the exercises
  • Vary the content a little

Own observations:

  • A number of participants did not follow the recommended pace of the course, and as a result ‘completed’ the e-tivities in one go, meaning they were not able to participate as intended in the discussions that evolved from their posts, or the posts of others.  While this was handled appropriately by the E-moderator, it may be possible to reduce the likelihood of this happening again by highlighting the dialogic nature of the activities as part of the course timetable.

Observations from previous participants:

  • E-moderator section could include more/clearer advice for encouraging participation as well as moderating the contributions of those who do participate.

We have only received responses from around 20% of the participants so far, so this picture will continue to grow.  However, we do certainly have an indication of some of the areas that could be improved.

Popularity of P2.0PLE

Well, the course is currently undergoing local trials, and the external registration period is due to close tomorrow – and boy have we seen some interest there!  At the outset, we hoped to enlist roughly 20 members of the JISC community to take part in the course over a two week period.  Following a week and a half of registrations, we are amazed to say that inboxes are overflowing with responses – we even have a waiting list!!  As it stands, we plan to run three groups of 17 (51) through the course at the same time (moderating capacity permitting), with 14 hopefuls sitting on the bench, praying for a spot to open up.

It is more than encouraging to know that the time we have devoted to developing this resource has been well spent, since it is clearly in demand.  However, it is also clear that this type of course is lacking in-house at a great many institutions.  We really hope that, by making this course openly available, we can fill at least some of this gap, and pass on a wealth of knowledge that has so far not been used to its full potential.

Transformations

During the re-purposing stages of this project, we have been working to re-use and/or re-shape a number of JISC project outputs and use them to enhance an online professional development course that is going to be running here at UoL: E-Moderating at Leicester.  In the early stages of this process, however, we found that the vast majority of the resources that were catalogued as part of this project fell outside of the scope of ‘e-moderating’.  This presented us with a challenge: how do we showcase these resources to our colleagues so that they are likely to impact on practice at a time when they could be the most useful? 

Working closely with our Course Design and Development Team here at UoL, we decided to take the opportunity to expand the scope of the course to cover the entire process of embedding collaborative activities: designing, moderating, and assessing.  We believe that this will allow us to open participation to a wider population, and facilitate an increase in the uptake of online collaborative activities by enabling those responsible to start by working through the design process, then go on to learn how to moderate their activities as they run, and finally to assess student contributions as they close.

By designing our output in this way, however, we raised a number of questions regarding openness and moderation provision.  Considering moderation first of all, we were faced with the question of how to expand the reach of the course – both in terms of participation and content – without placing too high a demand on the moderator’s time.  To allow for this, we have designed the initial iteration to foreground ‘e-moderating’ as a compulsory module, while ‘designing’ and ‘assessment’ remain self-moderated.  In this sense, the additional two modules are optional for those participants who would benefit from some or all of their contents/exercises, but all of their activities are self-directed and either not assessed or self-evaluated.

The second question we had concerned our options for making the course available as an open resource for the rest of the JISC community, and (hopefully) beyond.  For the meantime, we have opted to create and run the course through CourseSites; the benefits being that registration is not restricted to a single institution, and that the interface matches the version of Blackboard we are currently running here at UoL – meaning that participants will not experience difficulty in moving from one platform to the other.  While we will be able to export the site’s contents into a format that can be imported into Blackboard, we are still exploring other options for our final output format.

Building Collaborative Understanding

Following a HEA Workshop that was run here at the University of Leicester last week, my excitement for the potential benefits of collaborative learning activities – both for students and tutors – has definitely been fuelled.  In the resource review/repackaging that I have been conducting as part of this project so far, I have to admit that I have inadvertently focused on the potential of Web 2.0 technologies in facilitating collaborative production or task/response exercises.  I had never really considered the ways in which these tools could enhance the ‘meaning-making’ and ‘understanding-building’ processes that take place during reading exercises, but now that the idea has been planted I wonder why I hadn’t considered it before!

Thursday’s workshop – entitled #tagginganna – talked and walked us through the process of group reading, tagging, and annotation using two online tools: Digress.it (a plugin for WordPress), and eMargin (a tool developed by the Research and Development Unit for English Studies at Birmingham City University).  In the first half of the day, however, two tech-free exercises introduced us to the concept of collective meaning-making, as we were asked to read and annotate two texts at different levels of detail – i.e. word level and paragraph level – which reflect the options available through the online tools mentioned above.  It really was enlightening to experience this process first-hand, and to observe how my own understanding of the meanings behind each of the texts changed and grew as I worked through the exercises and read the comments that had been left by other members of the group.  Not only did this give each of us the chance to experience the texts from other perspectives, but I found that I was more inclined to search for evidence for my own interpretation in discussions where conflicting meanings had been suggested – a naturally occurring outcome that every tutor hopes for.

In the afternoon, Dr Mark Rawlinson, Reader in English Literature here at UoL, outlined the work he and his colleagues (Stuart Johnson and Alex Moseley, also of UoL) have been doing using Digress.it and eMargin as part of first-, third- and fourth-year English Literature modules, in which students engaged in:

  • “Identifying which are the significant elements of the text (developing skills of attention and recognition, but also sharing the fruits of analysis and discovery). In this case, the more readers there are, the more gets noticed (as with the seminar itself).
  • Making implicit meanings in the text explicit (developing skills of interpretation, but also sharing the products of interpretive activity, and leading to higher synthesis).
  • Debating the relationships between multiple readings of the same elements from different points of view.
  • Labelling (or tagging) elements which can be taken up into larger scale analysis of long narrative texts (developing research and information handling skills to prepare for the production of substantial essays)” (#tagginganna, Background)

What was particularly interesting about these pilots was the fact that the tagging exercises were not assessed or in fact compulsory, but highly recommended to all students.  In spite of this lack of direct incentive for the students, in terms of tangible rewards, Dr Rawlinson reports impressive active participation rates of around 50%. 

We were also lucky enough to be given a demonstration of both tools before the close of the workshop, which opened my eyes to their potential uses – both for individual and collaborative meaning-making and organisation.  In the case of eMargin in particular, I can see its value for more flexible collaborative annotation tasks; where Digress.it naturally focuses all comments at the paragraph level, eMargin is more flexible in that it allows users to highlight the exact section of text that they would like to comment on.  The beauty of both tools, however, is that all comments and their replies are aligned with, and linked to, the sections of the text to which they refer, making it easy for students to follow the flow of ‘conversation’ and to return to view or further contribute to responses.  This layout also makes it easy for tutors to monitor the development of the ‘conversation’, to add further comments drawing attention to specific comments or parts of the text, and to review comments by student in terms of frequency, content and tags.  Interestingly, eMargin also allows users to print a copy of the text with all comments as footnotes, which would make archiving and review much easier for both tutors and students (I am not sure whether Digress.it also offers this option, but it would be worth investigating if you are considering using either of these tools).

Drawing on the key themes identified in my previous post, one important factor that might help you to choose between the two tools relates to their privacy settings.  While Digress.it is linked directly with WordPress, and so carries the same privacy/openness settings, eMargin is an openly accessible tool that requires login access for all users; i.e. students’ comments are not ‘published’ in the sense that they would be open to contribution and/or scrutiny by the general public, which carries benefits for the quality of posts.  Dr Rawlinson comments, however, that perhaps these trials were so successful precisely because contributions were made in a risk-free environment, in which they were not being judged by assessors or indeed the wider community.

If you would like to view the presentation slides for eMargin, they are available through the project blog written by Andrew Kehoe (Project Manager), and Matt Gee.  This is definitely a project to watch, as they continue to develop and improve the software with the help of recently awarded JISC funding.

Key Themes from the Literature …

Following a review of the literature around the topic of collaborative learning, we have been able to highlight a number of important questions that have been raised about the factors contributing to the success of particular Web 2.0 technologies as they have been adopted for use in HE.  These considerations will help to inform the evaluation being conducted of existing JISC and University of Leicester (UoL) collaborative learning resources, and the creation of any additional materials:

1. Scaffolding vs. Autonomy

“While some studies support the claim that an excess of freedom in the way collaborative tasks are proposed may fail to engage all team members in productive interactions (Hewitt, 2005; Bell, 2004, Lui & Tsai. 2008; all cited in Demetiadis et al., 2009), others maintain that there is a danger also in exceeding in scaffolding students, that is “over-scripting” collaborative learning activities (Dillenbourg, 2002; Dillenbourg, 2004).  According to these authors, too much guidance may hinder learners’ creativity, flexibility and ability to self-regulate, therefore jeopardizing the co-construction of knowledge and ultimately causing a loss of effectiveness of the learning process (Dillenbourg & Jermann, 2007)” (Pozzi & Persico, 2011: 2).

Designers of student-centred constructivist learning environments – as blogs, wikis, and discussion boards have the potential to become – are expected to “[e]ncourage ownership and voice in the learning process” (Honebein, 1996: 12) by allowing students to construct their own learning path and identify their own goals.  This is where the benefit of using such tools is deemed to lie: in their ability to engage learners in the aspects of a topic that interest them most, by offering them a greater level of control over the direction of their learning and the shape of its outcome(s).  As Fountain (2005) reports: “wikis work most effectively when students can assert meaningful autonomy over the process”.  Pozzi & Persico (2011) have concluded, however, that online activities require a “careful tuning of Task, Teams and Time” in order to most successfully encourage participation and steer students towards a shared goal.  When given complete freedom, students reportedly struggled to maintain a focus on deadlines or individual responsibilities within the group, leading to varying degrees of contribution and a lack of coordination.

2. Ownership vs. Anonymity

Within an educational context, “rewards (grades, bursaries, grants, publications and hirings) are still typically based on individual contributions and efforts” (Fountain, 2005), and so collaborative learning tasks conducted in an online environment may cause issues in grading and moderation when individual contributors are not so easily identifiable – in a wiki, for example.  While it is possible to set up a wiki so that contributing authors are made visible, and blog and discussion posts are clearly identifiable, there are advantages to anonymity that should be considered.  “Garcia & Steinmueller (2003) outline three potential advantages: 1) an intensification and diversification of non-ownership/non-proprietary models; 2) an emergence of self/other identification hybrids; and 3) the proliferation of consumer/producer horizontal assemblages, reflecting the multi-authored character or information goods produced through collaborations” (cf. Fountain, 2005).  It is also possible that by allowing contributors to remain anonymous – or to use a pseudonym – a greater level of confidence could be instilled, since it is thought that a “fear of how the message will be received inhibits critical expression” (Fountain, 2005; Richardson, 2006).

On the other side of that coin, however, anonymity reduces the level of ownership and responsibility that individuals can take over their own contributions.  Also, in knowing their contributions will be identifiable, students may be more likely to maintain a professional and respectful level of netiquette (Grohol, 2006: cf. James, 2009).  Richardson (2006) also goes so far as to suggest that publishing students’ work in an openly accessible wiki/blog/forum “can not only be a powerful motivator but can also create a significant shift in the way we think about the assignments and work we ask of our students in the first place” (28).

3. Individual vs. Group Accountability

Following on from this question of identification is one regarding the rewards that are given for contribution – if in fact a reward is given at all.  Johnson and Johnson’s model of cooperative learning environments suggests that ‘individual accountability’ is an important factor in making sure that each member of the group learns all of the content.  In this sense, then, by offering grades based on individual performance in a group task we can assess how well each student has performed and to what extent they have contributed.  It is possible, however, that by separating individual marks the group mentality is lost in favour of an ‘every man for himself’ style of contribution – “where learners and peers are committed to achieving the same goals, they tend to regulate each other’s performances [55]” (Boulos, Maramba, & Wheeler, 2006: 4).  In order to reap the benefits of group working while maintaining and encouraging individual ownership, then, “Hertz-Lazarowitz, Kirkus and Miller (1992) suggest that the product of the collaboration process, e.g. a final collaborative problem solution, should be considered “group knowledge” to evaluate the quality of the collaborative knowledge construction.  According to Salomon and Perkins (1998), it is important to analyze both individual and collaborative learning outcomes when investigating collaborative learning” (Kopp & Mandl, 2011: 17).

We will be interested to observe the ways in which the projects/resources we review have been designed, with regards to the considerations outlined above, and how such factors may have impacted on their success/failure in terms of: participation, engagement and student/tutor feedback, etc.

References:

Boulos, M.N.K., Maramba, I. and Wheeler, S. (2006). Wikis, blogs and podcasts: a new generation of Web-based tools for virtual collaborative clinical practice and education. BMC Medical Education, 6(41).

Fountain, R. (2005). Wiki Pedagogy. Available: http://www.profetic.org/dossiers/dossier_imprimer.php3?id_rubrique=110. Last accessed 25 Jun 2012.

Honebein, P.C. (1996). Seven goals for the design of constructivist learning environments. In: Wilson, B. Constructivist Learning Environments: Case Studies in Instructional Design. 2nd ed. New Jersey: Educational Technology Publications, Inc. 11-24.

James, L. (2009). Creating an online, course-integrated generational learning community. In: S. Wheeler. Connected Minds, Emerging Cultures: Cybercultures in online learning. North Carolina: Information Age Publishing Inc. 91-117.

Johnson, D. and Johnson, R. (2002). Cooperative Learning. Available: http://www.cehd.umn.edu/research/highlights/coop-learning/. Last accessed 25 Jun 2012.

Kopp, B. and Mandl, H. (2011). Supporting Virtual Collaborative Learning Using Collaboration Scripts and Content Schemes. In: Pozzi, F. and Persico, D. (eds.) Techniques for Fostering Collaboration in Online Learning Communities: Theoretical and Practical Perspectives. Hershey: IGI Global. 15-32.

Pozzi, F. and Persico, D. (2011). Task, Teams and Time: Three Ts to Structure CSCL Processes. In: Pozzi, F. and Persico, D. (eds.) Techniques for Fostering Collaboration in Online Learning Communities: Theoretical and Practical Perspectives. Hershey: IGI Global. 1-14.

Richardson, W. (2006). Blogs, Wikis, Podcasts, and other powerful web tools for classrooms. London: Corwin Press.

Introduction to P2.0PLE

Upgrading your institution’s VLE is an almighty undertaking for all those involved, and one that is often met with apprehension and negativity by academics, support staff and students alike.  But even such a challenging situation presents us with a wonderful opportunity; it gives us the chance to review our current practices and improve them by taking advantage of the shiny new tools the updated VLE has to offer.  In preparation for our upgrade, therefore, we took the opportunity to conduct an institution-wide review of current uses and experiences of the VLE, which has highlighted a distinct lack of interactivity in favour of a ‘content-repository’ approach to online course design and delivery.  This project was formed in response to this discovery as a way for us to make the most of the position we are in; to establish and encourage the understanding that our VLE is a supplementary tool, designed to enhance our course delivery and not just a way for us to provide 24/7 access to the same content.  Specifically, we aim to provide the resources (both new and existing) and support needed to effectively design for online collaboration and collaborative learning.

“From a variety of theoretical perspectives it is claimed that learning improves when it is carried out as a constructivist and social activity” (Barros and Verdejo, 1998: 668); i.e. when carried out in collaboration with others towards a shared goal.  Under the Social Constructivist theory of learning, knowledge is considered to be constructed through participation in activities that are “discursive, relational and conversational in nature” (Ferdig & Trammell, 2004; Vygotsky, 1978).  In order to enhance online learning experiences for our students, then, we aim to use the outcomes of this project as a way to encourage and facilitate collaborative activities and discussion.  But why do we suggest that this be done through the VLE rather than through face-to-face group work?  Obviously, in a distance learning context, the only way to inject a little interaction into the curriculum is to do so through your VLE; but in a more traditional face-to-face setting, computer-supported collaborative learning (CSCL) can also be beneficial.  Not only does “making available a wide arrange of tools, proposing different tasks and activities, and presenting the information in various formats mean[…] fostering a complex and rich learning process (Human-Vogel & Bouwer, 2005)” (cf. Ligorio, Loperfido, Sansone, & Spadaro, 2011: 65), but by taking part in online collaboration students can: contribute at any time of day, from any location, and at their own pace (Barros & Verdejo, 1998: 670); be more reflective in their contributions, which often encourages those who feel unable to contribute in class to take part (Beetham & Sharpe, 2007: 227); be actively involved in the construction of their own learning (Boulos, Maramba, & Wheeler, 2006); and “engage in higher-order, critical thinking and literacy” (Jonassen, Myser, & McKillop, 1996), going against the tradition in which “teams simply divide up the work, work independently on their part, and then come together at the end and staple their independently completed work together, often with minimal editing to make it a cohesive paper” (Clinebell, Clinebell, & Stecher, 2010: 1).  “Publication also offers the opportunity for feedback, which, in turn, scaffolds a learner in his or her quest for knowledge construction” (Ferdig & Trammell, 2004: 1).

In approaching this task, then, we set out to conduct a review of currently available JISC resources in the field of online collaborative learning.  During the migration process from Blackboard 9.0 to Blackboard 9.1, we shall make use of appropriate JISC and University of Leicester resources, as well as additional materials created as part of this project, to provide the support our academics need in order to design effectively for collaborative learning in our new VLE.  As a starting point, we shall be evaluating the following project resources against a set of criteria to determine their successes, failures, and areas for improvement, so as to be able to offer appropriate and openly available resources for our own staff as well as the wider JISC community: