Issues in the assessment of real-life learning with ICT
It is widely acknowledged that assessing learning with ICT is a challenging task (Johnson et al. 1994, McDougall 2001, Harrison et al. 2002, Cox et al. 2003). Throughout the development of the use of ICT for learning, extensive work has been undertaken on formative evaluation and assessment of software and of innovative projects, but no similar range of effective and reliable ways of assessing real learning gains attributable to or associated with the use of ICT has so far been developed. Nevertheless, it is critically important to develop effective techniques for doing this in the light of the major developments and growing investment in ICT resources for learning, including of course real-life learning.
The difficulty of assessing real-life learning with ICT is the result of a combination of many factors. This paper will focus particularly on three of these, all of which contribute to the challenge of the task. These are: individual differences in cognitive and learning styles among learners; the complexity of what is to be assessed, and the likelihood that existing forms of assessment cannot measure all of the aspects of learning that might occur when a learner works in a multimedia environment; and the need for assessment techniques that are more sophisticated than the traditional study-and-test approaches, in order to allow for the complexity of learners’ interactions and achievements with ICT.
Individual differences among learners
The matter of individual differences among learners has long been an issue in education. Current research is showing that such differences can be dramatically evident and can have a major impact on assessment in settings where ICT is used. Differences in impact on learning are being observed in case studies of students involved in exactly the same ICT activity. These are exemplified by findings from a research project investigating the effects of the use of ICT in students’ writing, recently completed by John Vincent at the University of Melbourne. Although this study was undertaken with school students, it raises an issue of major importance for learning beyond school settings as well.
As might be expected in a mixed-ability group, Vincent found a wide range of levels of attainment in writing with pen and paper. Some students wrote freely and with high levels of complexity. Others had enormous difficulty in producing more than a few words; some of these students had been assessed as “at risk” and were receiving remedial help. Of course there were many students performing at levels between these two extremes.
Assessing learning in multimedia contexts
Vincent described students, almost completely incapable of expression in words, who produced complex and sophisticated narratives when allowed to work in multimedia environments. Judged solely in verbal terms, these students appeared to be severely limited in their ability to express their ideas and understandings. However, the multimedia artefacts they produced dramatically belied this assessment.
Vincent notes that almost all of the official assessment tools that teachers use are verbally based, so the performance of a learner is judged by skills with words. He used the Writing Assessment instrument of the Victorian Curriculum and Assessment Authority’s Achievement Improvement Monitor (VCAA 2003) to assess his students’ pieces written with pen and paper and with word processor. However, he found that this instrument was completely unsatisfactory for assessing the multimedia products of his students.
This paper has explored some aspects of the complexity of assessing real-life learning with ICT, focusing on three particular issues. The first is the problem of differences in cognitive styles of learners. Some aspects of this were illustrated with a description of research on school students’ writing, and the extension of this work into a current study of adult learners’ preferences for different software environments for programming for robotics.