E-Learning Technologies and Evidence-Based Assessment Approaches


Christine Spratt (Royal Australian and New Zealand College of Psychiatrists, Australia) and Paul Lajbcygier (Monash University, Australia)
Indexed In: SCOPUS
Release Date: May, 2009|Copyright: © 2009 |Pages: 344
DOI: 10.4018/978-1-60566-410-1
ISBN13: 9781605664101|ISBN10: 1605664103|EISBN13: 9781605664118|ISBN13 Softcover: 9781616925796
Description & Coverage
Description:

Educational researchers and academicians need the latest advances in educational technologies in order to enhance instruction and aid student assessment and learning.

E-Learning Technologies and Evidence-Based Assessment Approaches provides a variety of contemporary solutions to identified educational problems related to the assessment of student learning in e-learning environments. This book draws on research and evaluation expertise of academicians engaged in the day-to-day challenges of using e-learning technologies and presents key issues in peer assessment using advanced technologies.

Coverage:

The many academic areas covered in this publication include, but are not limited to:

  • Computer science and humanities
  • E-learning students
  • E-learning using wikis
  • Identifying latent classes
  • Individual learning in e-learning settings
  • Issues in peer assessment
  • Online Assessment
  • Peer assessment and e-learning
  • Studying using a mobile device
  • Validation of e-learning courses
Reviews and Testimonials

The book's concern is at the nexus of the ICT revolution, the requirement for ensuring quality in higher education and the globalization of education through various forms of e-learning. The text aims to assist practitioners and researchers design strategies that enable us to investigate broader research questions in assessment, e-learning and pedagogy.

– Christine Spratt, Monash University, Australia
Editor Biographies
Christine Spratt has a Bachelor of Education from the University of New England (Armidale), and a Master's Degree in Distance Education and a PhD (Education), both from Deakin University (Geelong). She has worked in Australia and Singapore in a variety of academic teaching, research and senior management positions in the university, corporate and non-profit educational sectors, particularly in health (Nursing, Health Sciences and Medicine). Dr Spratt has particular research and pedagogical interests in curriculum design and development, e-learning, and open and distance education. She has extensive experience in educational leadership in professional education, managing a variety of education and curriculum projects at universities such as Monash, Deakin and the University of Tasmania in Australia.
Paul Lajbcygier combines extensive industry and academic experience in investments. Since 1990, Paul has provided investment advice to various prominent domestic and international fund managers, banks and hedge funds. Since 1995, Paul has published over 50 academic papers and generated over $3.1 million in government grants and payments in-kind. He has sat on over 10 journal editorial boards and conference program committees. He has also worked and researched at some of the best business schools in the world, including London Business School and the Stern School of Business, New York University.

Preface

Introduction

The international quality agenda in higher education has created extensive interest in all aspects of teaching and learning in the post-secondary sector, especially in tertiary or higher education. Most OECD countries now have some form of quality agency responsible for accrediting the quality of higher education. While the scope of such quality agencies is broad, the assessment of student learning outcomes, which contributes to the certification of institutional degrees, lies at the heart of any quality system. Academics and others engaged in post-secondary education are under considerable pressure to respond proactively to the scrutiny of quality agencies. Assessment is therefore a key concern of all stakeholders in the sector, not least teachers and learners.

Along with the quality agenda, the revolution in information and communication technologies (ICT) and the exponential growth in e-learning is another factor that has increased interest in assessment systems; this has occurred concurrently with the globalization of education and the subsequent expansion of cross-border student enrolments. Rapid changes and advances in technology therefore see 'emerging technologies' harnessed for educational purposes as rapidly as they appear, often in the absence of convincing empirical evidence of their efficacy. In this text we see e-learning as a generic term that refers to the use of various electronic media to deliver flexible education. It presupposes a more learner-oriented approach to teaching and learning. E-learning approaches might include the use of the web; static computer-based learning resources in the traditional classroom, or perhaps in the workplace; and technologies that support learners learning at home, away from their campus of enrolment.

We know intuitively and we have growing research evidence that thoughtful e-learning design incorporating ICTs has the potential to enhance the student learning experience. We know less about how ICTs might also enhance related assessment systems so that they develop as transparent, valid and reliable measures of student learning outcomes and performance capability. Academic best practice also demands evidence-based (in other words research-led) learning and teaching practices.

Currently, dissatisfaction with aspects of the assessment of student learning outcomes is evident in both the school and post-secondary sectors. In Australia this is evidenced by a recent Australian Council for Educational Research (ACER) report and a national study carried out by the University of Melbourne in 2002. The assessment of student learning is widely recognized as an area that needs 'renewal' as part of the broader interest in improving the quality of tertiary teaching and learning. While there is much activity across the sector in e-learning and assessment, there are few texts specifically related to the field. This of course reflects the fact that the field is nascent.

It is our contention that assessment drives learning. Moreover, the promotion of an aligned system of learning demands that learning outcomes, teaching strategies and assessment are coherent and complementary, thereby making expected learning outcomes explicit (Biggs 1999). As such, assessment approaches ought to reflect learning as multidimensional, integrated and performative, and thus central to pedagogy: not peripheral and not additional. We argue that assessment is beneficial when it is continuous rather than intermittent and when it allows opportunities for timely feedback (Carless 2007). Contemporary views of assessment also suggest that it ought to be relevant, authentic and adaptive (Gulikers, Bastiaens, & Kirschner, 2004; Gulikers, Bastiaens, & Kirschner, 2005; Herrington & Herrington 2006; Challis, 2005), valid and reliable (Nicol 2007), prepare learners for life-long learning (Boud & Falchikov, 2006), and, importantly, offer learners choice and diversity in approach.

There is an extensive literature available to assist us to determine the best way to create meaningful, interactive and rewarding learning experiences for learners; e-learning and blended learning environments are now well accepted as integrative strategies in the creation of such environments (Laurillard, 2002; Herrington, Reeves & Oliver, 2006; Scott, 2006). However, ICTs and the e-learning opportunities that arise from them do not in and of themselves create opportunities for innovation in assessment; rather, this occurs when teachers think innovatively about the purpose of assessment and how ICTs might assist their educational goals.

In light of this, the book’s concern is at the nexus of the ICT revolution, the requirement for ensuring quality in higher education and the globalization of education through various forms of e-learning as described above. The chapters within it are all concerned with an important question:

How can information and communication technologies be used to improve assessment and hence the quality of educational outcomes for students?

Furthermore, the text aims to assist practitioners and researchers to design strategies that enable us to investigate broader research questions in assessment, e-learning and pedagogy, for example:

    1. What is the impact of social software technologies on assessment practices?

    2. Is there a need to reconsider issues of validity and reliability in e-learning and assessment?

    3. How will advanced technologies enable us to assess the readiness of students for the workplace?

    4. How will e-learning enable increased opportunities for learners to design and judge their own learning and that of their peers?

    5. How will technology influence new assessment approaches?

    6. What are the most efficient and effective styles of assessment in e-learning environments?

    7. How should e-learning be ‘blended’ most effectively with conventional forms of education in higher education?

    8. In what ways can e-learning and developing mobile learning technologies inform pedagogical practice and research designs?

    9. Does e-learning design affect the experience of student learning?

    10. What formative assessment design creates meaningful learning in e-learning environments?

In designing and developing this text we were guided by our own beliefs and educational values about the purposes of assessment in higher education. We have been influenced in our thinking by a number of key researchers and policy makers in Australia and internationally; in particular the 'Assessing Learning in Australian Universities Project: 2002' headed by Professor Richard James and his colleagues at the University of Melbourne, a large project which investigated assessment practices across the Australian sector. The subsequent report and website have proven a valuable adjunct to quality improvement initiatives in the assessment of student learning in Australian universities, as they were "designed to support Australian universities and academic staff in maintaining high quality assessment practices, in particular in responding effectively to new issues in student assessment". Other Australian initiatives have included various funded projects from the Australian Learning and Teaching Council (formerly The Carrick Institute for Learning and Teaching in Higher Education), which has supported innovation in assessment generally, in specific discipline areas, and more recently in e-learning and e-assessment.

Recently in the United Kingdom, the REAP Project, an initiative of the Scottish Funding Council (2005-2007) under its e-Learning Transformation Programme, supported three university partners, the University of Strathclyde (as project leader), the University of Glasgow and Glasgow Caledonian University, to establish pilot projects to support the redesign of formative assessment and feedback practices in large-enrolment first-year modules across these three institutions. Furthermore, the project aimed to design and develop useful approaches to embed creative thinking about assessment into institutional policies and quality improvement processes. The REAP website remains an open, active repository of data and resources for the international assessment community. Research and evaluation material from the project outcomes is also evident in the more recent literature in assessment and e-learning (Nicol 2008; Nicol 2007a, 2007b, 2007c; Draper & Nicol 2006).

These initiatives provide support for the approach that has informed the structure and content of the text: to be efficient and effective, assessment systems have to present to the learner clear learning goals and objectives, transparent standards of expected work or performance, timely and appropriate feedback, opportunities to learn from each other, and the prospect of remediation.

Furthermore, as outlined earlier, assessment systems ought to reflect thoughtful pedagogical innovation as well as evidence-based or research-led approaches. In higher education, we have seen calls for more qualitative and innovative approaches to assessment alongside the increased managerialism and quantitative measures of performance proposed by the quality agenda described earlier. That these ideas seem at odds with one another surely highlights that assessment is one of the most challenging aspects of academic work. Consequently, the chapters in this book present novel practices and contribute to the development of an evidence-based approach to e-learning and assessment.

In light of this, the book aims to provide practitioners with evidence-based examples to assist them to integrate e-learning assessment practices into their pedagogical frameworks, as well as to advance future research and development trends in the broad field of e-learning and the assessment of student learning outcomes.

It is evident from the chapters in the text that we have taken an eclectic view of assessment as we attempt to present a range of exciting and interesting techniques in the contemporary applications of assessment technologies and approaches. There are chapters that also describe the strategies practitioners are using to appraise, judge and evaluate their approaches as well as the design and development of research strategies to evaluate student performance in e-learning settings. Moreover, the text also presents various approaches to finding out from students themselves what they think about the courses they are engaged in and the assessment tasks they are expected to undertake to demonstrate they have ‘learnt’.

Chapter 1: Re-assessing Validity and Reliability in the E-Learning Environment

In this opening chapter Selby Markham and John Hurst draw on their extensive experience in educational psychology and their many years teaching, researching and collaborating with the 'Computers in Education Research Group' at one of Australia's most influential universities. The approach they have taken in the chapter is also informed by a number of interviews which they undertook with practicing university teachers to assist them in designing and developing the arguments of the chapter. In acknowledging the central role of validity and reliability in any assessment system, they first remind us of the principles underpinning these key concepts and take us on a brief historical journey through the most influential literature in the field; it is no surprise that this draws heavily on work in school education and the psychometrics of educational measurement. The chapter suggests that because the e-learning environment creates new ways of being and interacting for teachers and learners (in what they call a socio-technical pedagogical environment), it ought to allow us to re-assess notions of validity and reliability in e-assessment. They introduce the idea of 'knowledge validity' and argue that educational systems may need to create ways to educate students about acceptable standards of engaging with the extensive information sources available to them. They do not argue that our traditional notions of validity and reliability are outmoded; rather, they suggest that the e-learning technologies and tools that are informing how we create e-learning environments necessarily call on us as 'good teachers' to be 'good learners'; that is, to be self-reflective and critical about our own assessment practices.

Chapter 2: Assessing Teaching and Students’ Meaningful Learning Processes in an E-Learning Course

Päivi Hakkarainen and her colleagues from the University of Lapland have developed an empirically based framework of 'meaningful learning' (the model for teaching and meaningful learning: TML) based on several years' collaborative research. In this chapter they explore the use of the assessment framework for a particular subject in the social sciences at their home institution. In keeping with a commitment to authentic, active learning, the devised model, while unique, draws on familiar theories from well-known educational intellectuals, in particular Ausubel, Dewey and, more recently, Jonassen. For the specific course investigated to inform the chapter, the authors used a range of e-learning supported media, in particular digital videos, within a case-based pedagogical and authentic assessment approach. The TML model was used as the theoretical assessment framework. While the chapter does not describe the impact of the e-learning environment on student learning outcomes or related assessment specifically, we believe it presents us with convincing evidence that well designed e-learning strategies, including authentic assessment implemented in the context of a holistic approach to course or subject design, promote effective learning. Furthermore, it emphasises the value that students place on learning that reflects or simulates the authentic real-world experiences they may anticipate in their working lives.

Chapter 3: Collaborative E-learning Using Wikis: A Case Report

Charlotte Brack writes this chapter in the context of a large medical school in a major Australian university. Students typically are well motivated and highly intelligent, yet heterogeneous. As is the case internationally, many students come to their undergraduate medical studies in Australia from abroad, a significant number from a school experience where English is not their first language and where they may not have undertaken any secondary-school-level science subjects. The chapter presents an innovative program of study, conducted over an intensive three-week period, devised using web 2.0 technologies (wikis) to prepare students who have not completed year 12 biology for their first year of medical studies. The program is voluntary, and significant attempts have been made to engage and motivate students who have no compelling requirement to attend aside from their own interest in being well prepared for the demands of the formal program which occurs later. For us the chapter presents a case study of engaging educational design and innovative assessment, albeit formative and informal. Importantly, she argues that the use of social software assisted with transition issues for these students, who were new to the socio-cultural and political setting in which they were to study in Australia. Certainly one can see in this case useful potential applications in formal university programs.

Chapter 4: Learning and Assessment with Virtual Worlds

Mike Hobbs and his colleagues Elaine Brown and Marie Gordon have been experimenting with the educational potential of virtual worlds, in particular 'Second Life', for some years. In this chapter they introduce us to the nature of the environment and the constructivist cognitive approach to learning that it supports. They draw extensively on several case studies of work-in-progress in their undergraduate program at Anglia Ruskin University in the UK. They argue that the virtual world is particularly suitable for collaborative and peer-directed learning and assessment opportunities. As such it seems an extremely 'authentic' environment for students to engage in learning and assessment activities. Moreover, the work that they present was structured to enable increased ownership, indeed design, of the learning and assessment experiences by the learners themselves. The chapter presents evidence that students' engagement in the virtual world has supported the development of important generic skills in group work, project management and problem solving, which of course ought to be readily transferable across the other learning environments students will encounter in their studies and working lives. They suggest that loosely specified assessments with suitable scaffolding, within the rich environment of 'Second Life', can be used to help students develop as independent, self-motivated learners. While the chapter reports promising findings and postulates future trends, one can see that the design and development of longitudinal studies of student learning and assessment in these virtual worlds would be valuable.

Chapter 5: A Faculty Approach to Implementing Advanced E-Learning Dependent Formative and Summative Assessment Practices

Paul White, Greg Duncan and a number of their colleagues have spent the past five years using a Faculty-based learning and technology committee to drive quality improvement approaches in teaching, learning and assessment at what is one of Australia's largest and most diverse pharmacy departments; there are over 1000 undergraduate students, a large cohort of whom are international students. Using the Faculty Teaching and Learning Technologies Committee as an organisational impetus for change, they have effectively created considerable transformation in a Faculty with hitherto quite traditional approaches to teaching and assessment. Most recently, the authors and their colleagues have used an audience response system to increase the level of formative assessment that occurs during lectures to large cohorts. The audience response system sends a radiofrequency signal via USB receivers to the lecture theatre computers, with the proprietary software allowing computation of the input data. The data are recorded within the software, which instantaneously produces a summary histogram in the PowerPoint file being used to show the questions. The chapter also presents an overview of the use of new technologies in a blended learning approach, where an institutional learning management system has been complemented by technologies and software such as Skype and web-based video-conferencing to support distributed learning and assessment in postgraduate education. Recognizing that the range of new technologies available to universities is substantial, they argue that the best results are achieved by selecting options that meet teaching needs. The challenge for the future in terms of implementation is to encourage diversity and, at the same time, deploy those technologies that have been trialled successfully in as many suitable contexts as possible.
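To make the aggregation step concrete, the sketch below shows, in Python, the kind of reduction such a system performs: collecting one keypad response per student for a multiple-choice question and tallying the per-option counts that would drive the summary histogram. It is a minimal illustration only, not the proprietary software the chapter describes; the question options, keypad identifiers and responses are invented.

```python
from collections import Counter

def tally_responses(responses, options=("A", "B", "C", "D")):
    """Reduce raw keypad responses to per-option counts for a histogram.

    `responses` maps a (hypothetical) keypad id to the option selected.
    Invalid or missing selections are ignored, as a real system might do.
    """
    counts = Counter(v for v in responses.values() if v in options)
    return {opt: counts.get(opt, 0) for opt in options}

def render_histogram(counts, width=30):
    """Print a simple text histogram of the kind a lecturer might display."""
    total = sum(counts.values()) or 1
    for opt, n in counts.items():
        bar = "#" * round(width * n / total)
        print(f"{opt}: {bar} {n} ({100 * n / total:.0f}%)")

if __name__ == "__main__":
    # Illustrative data only: keypad id -> selected option.
    demo = {"kp01": "A", "kp02": "C", "kp03": "C", "kp04": "B", "kp05": "C"}
    render_histogram(tally_responses(demo))
```

In a lecture setting the same counts would be rendered as a chart in the presentation software rather than as text, but the underlying tally is the formative-assessment payload the teacher reacts to in real time.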

Chapter 6: Ensuring Security and Integrity of Data for Online Assessment

While there are fairly well-established processes for establishing student identification for examination purposes in face-to-face settings, Christine Armatas and Bernard Colbert argue that identification and verification matters remain one of the biggest challenges to the widespread adoption of e-learning assessment strategies, especially for high-stakes summative assessment. Their chapter surveys the latest technologies and research advances in the field. Usefully, they discuss these often complex technologies in the milieu of a large e-learning unit taught at a major distance education university. When one is confronted with over 1000 learners, dispersed geographically and temporally, who are studying in a 'fully online' environment, the assessment challenges demand innovative and critical thinking. Currently, as the authors argue, there are considerable limitations on assurances for identification and verification of learners who may be undertaking online assessment in such a setting. Consequently, we are eager for newer and developing technologies such as public key cryptography and a "network in a box", which Armatas and Colbert describe, so that we can continue to innovate in the field.
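As a rough illustration of the role public key cryptography can play here (a generic sketch, not the specific scheme Armatas and Colbert propose), a student's device could sign each submission with a private key registered at enrolment, and the assessment server could then verify that signature against the stored public key. The Python example below uses the third-party cryptography package; the submission content and identifiers are invented for illustration.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Enrolment: the student generates a key pair; the public key is registered
# with the institution, the private key stays on the student's device.
student_private_key = ed25519.Ed25519PrivateKey.generate()
registered_public_key = student_private_key.public_key()

# Submission: the student's device signs the assessment artefact.
submission = b"student_id=12345;unit=FIN3001;answers=..."  # illustrative only
signature = student_private_key.sign(submission)

# Verification: the assessment server checks the signature against the
# registered public key; tampering or a different key raises an error.
try:
    registered_public_key.verify(signature, submission)
    print("Submission verified as coming from the registered key holder.")
except InvalidSignature:
    print("Verification failed: submission rejected.")
```

Such a scheme can confirm that a submission came from the holder of the registered key, though, as the chapter's broader discussion makes clear, it cannot by itself confirm who was actually sitting at the keyboard.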

Chapter 7: Issues in Peer Assessment and E-Learning

The increasing interest in collaborative learning and assessment that the new technologies encourage was notable in the literature as we prepared the text. We decided that there was enough interest in the field to warrant the inclusion of a more theoretical chapter addressing the implications of peer assessment in e-learning environments. Robyn Benson has an extensive background as an educational designer in open and distance learning and brings to the chapter her insights from many years preparing off-campus learners to be both independent and collaborative learners. Her chapter takes a pragmatic and evidence-based approach and addresses a number of key issues in the use of e-learning tools and environments for implementing peer assessment. She begins by differentiating peer assessment for learning and peer assessment of learning, and considers that the singular challenge for successful design of peer assessment lies in the characteristics and requirements of teachers and students as users. Importantly, she highlights, as Markham and Hurst do in Chapter 1, that the capacities offered by advanced assessment technologies may force us to reconceptualise the way in which evidence used for peer assessment of learning is presented and judged.

Chapter 8: The Validity of Group Marks as a Proxy for Individual Learning in E-Learning Settings

The central concern of this chapter is group assessment in an e-learning environment. The chapter provides a pragmatic example of using a research study as an avenue to debate some of the issues raised by Markham & Hurst and Benson earlier. While the chapter's underpinning pedagogy was not about peer assessment per se, the research did investigate the way in which learners in an e-learning environment collaborated on a group project, part of the formal assessment requirements for a particular unit of study in financial computation. The underpinning research measured individual students' contributions to group processes, individual students' influence on their peers' understanding of the related curriculum content, and the influence of the overall group experience on personal learning in an e-learning environment designed to act as a catalyst for group learning. As well, the learning objectives fundamental to the project work were tested individually as part of the final examination. The chapter comments on the relationship that may exist between students' perceptions of the e-learning environment, the group project work and e-learning group dynamics. The authors conclude that e-learning environments will not, of themselves, be successful in the absence of excellent and innovative educational design, a view evident across several other chapters. The authors also wonder, based on their findings, whether more energy ought to be spent on designing effective and efficient group learning opportunities rather than necessarily assessing them.

Chapter 9: Validation of E-Learning Courses in Computer Science and Humanities: A Matter of Context

Friedman, Deek and Elliot explore the evaluation of student performance in e-learning settings. We believe the chapter offers a potentially useful frame of reference for us as we think more holistically about the assessment of student learning outcomes as one part of the teaching and learning puzzle. Hakkarainen and her colleagues in Chapter 2 provide a similarly broad perspective on assessment and pedagogy. Friedman and his colleagues here want to know why students at the New Jersey Institute of Technology often do not 'persist' in their e-learning programs; the authors recognised that answering the 'why' demanded a research gaze through multiple lenses. An interesting aspect of their work in this investigation of persistence is that the data are derived from two quite different disciplines, computer science and humanities. Moreover, the 'why' seems to have presented them with some interesting and perhaps unanticipated findings. The chapter does many things; for example, it prompts us to think critically about pedagogical research design, and it forces us to rethink how the 'variables' that affect learning (e.g. learning styles, instructor teaching style, interaction, course structure and assessment design) might be better integrated to assist in learning design strategies. Moreover, it provides some compelling evidence to take the development of information literacy skills seriously, for, as they suggest, how can e-learning and assessment benefit students if they lack the essential skills in the first place?

Chapter 10: Designing, Implementing and Evaluating a Self-and-Peer Assessment Tool for E-Learning Environments

We know from the extensive literature in team-based or group assessment that students, and indeed teachers, are often skeptical regarding the purpose of team assignments and indeed their reliability and validity over time. Teachers, for example, often find it difficult to reconcile their interest in giving learners experiences in group learning and peer assessment with their worry that such approaches are not perceived by students as 'fair'. Like Lajbcygier and Spratt in Chapter 8, many teachers are concerned whether the group marks they award in such settings truly reflect individual learning outcomes. Learners, on the other hand, often do not trust their peers to 'pull their weight' and resent what the literature calls 'freeloaders', who may do little to contribute meaningfully to the group task but seem to be 'rewarded' with an undifferentiated group mark. In this chapter Tucker, Fermelis & Palmer present work based on four years of research, testing and development of an online self-and-peer continuous assessment tool originally developed for small classes of architecture students. The authors argue that the e-learning tool promotes independent, reflective, critical learning, enhances students' motivation to participate and encourages them to take responsibility for their learning. The findings of their pilot studies support the positive contribution of online self-and-peer assessment within student group-based assignments.
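One widely used way in which self-and-peer assessment tools address the 'freeloader' concern is to moderate the group mark by an individual weighting factor derived from the ratings each member receives from the team. The Python sketch below is a generic, hypothetical illustration of that idea under simple assumptions (a cap on the factor, invented ratings), not a description of the particular tool Tucker, Fermelis & Palmer report.

```python
def individual_weighting_factor(member_ratings, all_ratings):
    """Ratio of one member's average peer rating to the team-wide average.

    `member_ratings` is the list of ratings this member received;
    `all_ratings` is a list of such lists, one per team member.
    """
    member_avg = sum(member_ratings) / len(member_ratings)
    team_avg = sum(sum(r) for r in all_ratings) / sum(len(r) for r in all_ratings)
    return member_avg / team_avg

def individual_mark(group_mark, member_ratings, all_ratings, cap=1.1):
    """Scale the group mark by the weighting factor, capped so a highly
    rated member cannot be pushed far above the group mark."""
    factor = min(individual_weighting_factor(member_ratings, all_ratings), cap)
    return round(group_mark * factor, 1)

if __name__ == "__main__":
    # Illustrative ratings (1-5) received by three team members from their peers.
    ratings = [[5, 4, 5], [3, 3, 2], [4, 4, 4]]
    for i, member in enumerate(ratings):
        print(f"Member {i + 1}: {individual_mark(72.0, member, ratings)}")
```

The point of such a moderation step is precisely the one raised above: it differentiates individual marks within a group so that an undifferentiated group mark no longer rewards a member who contributed little.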

Chapter 11: Identifying Latent Classes and Differential Item Functioning in a Cohort of E-Learning Students

Sanford and his colleagues have considerable experience in teaching courses in accounting and finance, which many students see as complex and difficult. The student cohort at their institution, like all of ours, is heterogeneous, not least in the cultural backgrounds of the students. They report a case that used differential item functioning analysis based on attributes of the student cohort that are unobserved. The investigation revealed that the bias associated with the differential item functioning was related to the a priori background knowledge that students bring to the unit. This is extremely interesting work, especially given the diversity on our campuses. While the nature of the research and analysis is quite specialised, the implications of the work for the design of multiple-choice examination test items, for instance, are valuable, and the implications for course design, remediation processes, or indeed identifying learning needs prior to the commencement of formal studies in e-learning contexts, seem promising.
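For readers unfamiliar with the technique, a common way to screen a test item for differential item functioning, simpler than the latent-class approach the chapter uses, is logistic regression: model the probability of answering the item correctly as a function of overall ability and group membership, and flag the item if the group term adds explanatory power. The sketch below is a hypothetical illustration on simulated data using the third-party statsmodels package.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400

# Hypothetical data: total test score (ability proxy), group indicator
# (e.g. with/without prior background knowledge), and item correctness.
total_score = rng.normal(60, 12, n)
group = rng.integers(0, 2, n)
# Simulate an item that is easier for group 1 at the same ability level.
logit = -6 + 0.08 * total_score + 1.0 * group
item_correct = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Logistic regression DIF screen: ability plus a uniform DIF (group) term.
X = sm.add_constant(np.column_stack([total_score, group]))
fit = sm.Logit(item_correct, X).fit(disp=False)
print(fit.summary(xname=["const", "total_score", "group"]))

# A significant, sizeable coefficient on `group` suggests the item functions
# differently across groups even after controlling for overall ability.
```

In Sanford and colleagues' setting the grouping variable is not observed directly, which is why their latent-class approach is needed, but the logic of controlling for ability before attributing group differences to the item is the same.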

Chapter 12: Is Learning as Effective when Studying using a Mobile Device compared to Other Methods?

In his foreword, Gary Poole reminds us of the early debates around the "no significant difference phenomenon": did the use of educational technologies have any impact (positive or otherwise) on student learning outcomes? Here in this final chapter, Armatas and Saliba present us with empirical evidence from a laboratory-based research study that compared the attainment of learning outcomes from four sources: "smart" mobile phones, print-based learning resources, a traditional lecture format, and a computer. Those of us with a background in traditional print-based distance education will smile wryly that print seemed to have the upper hand! Their work demonstrates that learning outcomes are similar when students study using a computer, mobile phone or lecture format, but that studying with print material yields slightly superior test results. Like all good researchers, the authors recognise the limitations of the research design and the impact of experimental artifacts, in particular whether their self-reportedly computer-"savvy" participants needed more practice in using the mobile phone prior to undertaking the project. They argue that mobile learning (m-learning) is of increasing interest across the higher education sector as, like other technologies, changes and advances in the field are progressing at a rapid rate. For those of us prepared to innovate and take pedagogical risks in learning and assessment design, it is easy to anticipate the potential of m-learning for workplace-based learning and assessment specifically.

Additional Readings

In light of the focus of the text, we have solicited several other brief works that reflect its key themes: the assessment of student learning outcomes, using e-learning approaches innovatively, and the importance of designing rational evaluation strategies to measure the success or otherwise of e-learning assessment environments.

Reading 1: Evaluation Strategies for Open and Distributed Learning Environments

Tom Reeves and John Hedberg are well known internationally for their work in e-learning and multimedia pedagogies. Their chapter offers a very pragmatic approach to the important matter of evaluation. They are relatively critical of evaluation 'models' that are over-complicated or make claims about outcomes the model could never deliver; this is often the case, they argue, in respect of 'impact evaluation' models. Reeves and Hedberg are pragmatists; their pyramid framework is directed at assisting e-learning designers to provide appropriate information to all stakeholders so that evidence-based decisions can be made during development and for improvement. The model accommodates the increasing interest in impact evaluation but recognizes its limitations. The two explanatory case studies, while brief, serve to illustrate the model in action. For us, the principles of their model are reflected in Brack's work (Chapter 3) with undergraduate medical students and in the more 'organizational evaluation' aspects of the work described by White & Duncan (Chapter 5) in this volume.

Reading 2: Introducing Integrated E-Portfolio across Courses in a Postgraduate program in Distance and Online Education

In his exploration of e-portfolios, Bhattacharya commences with a brief review of the historical educational uses of traditional paper-based portfolios, in particular in allowing students to collect, store and retrieve evidence of their achievement of identified learning outcomes in various professional development settings. He links the growth in the use of e-portfolios to the continuing interest in finding ways to enable learners to take more control of their own learning and to have the capacity to store evidence in various digital formats. He draws on a case report from Massey University in New Zealand, a major university with a long and respected history in open and distance education. The e-portfolio he describes is used by students in a postgraduate program in education. The chapter highlights the benefits and pitfalls of e-portfolios and illustrates that key principles of educational design ought to inform the purpose and structure of e-portfolios and the way in which they are assessed. While the chapter reports a pilot project, the review and analysis of the way his project aimed to integrate the e-portfolio across a program offers valuable insights for those who may begin to investigate the pedagogical applications.

Reading 3: Practical Strategies for Assessing the Quality of Collaborative Learner Engagement

This reading explores a number of themes articulated in various chapters in this text. The recognized value of peer collaboration in learning is explored in a number of ways through case studies that illustrate the benefits and challenges of self-assessment in the research and evaluation of e-learning environments. In doing so, the cases illustrate principles of research into the assessment of student learning in e-learning environments and the evaluation of the efficacy of those teaching and learning environments. The authors offer useful advice about the design of e-learning environments to foster learner engagement and the measurement of the efficacy of those designs; the chapter also presents various models of research pertinent to each. Le Baron & Bennett's work is usefully read alongside Brack's and White & Duncan's work in this volume (Chapter 3 and Chapter 5 respectively). Robyn Benson's Chapter 7 in this volume is also relevant given its more theoretical discussion of peer assessment.