Digital tools and practices in assessment

Journal title: RIV Rassegna Italiana di Valutazione
Authors: Luca Salmieri, Orazio Giancola
Publishing year: 2019
Issue: 2018/70
Language: Italian
Pages: 23 (pp. 75-97)
File size: 543 KB
DOI: 10.3280/RIV2018-070005

Digital tools and ICTs have long since begun to change the field of educational assessment and have the potential to change it further. One positive aspect is that administering a computer-based test is more efficient and economical than a traditional paper-and-pencil one (Bridgeman 2009). Other features also speak in their favour: digital and ICT devices can better reflect the skill domains to be evaluated, including specific cognitive constructs that are difficult to assess with traditional tools and that have largely emerged as an integral part of the digital era (Kelley, Haber 2006). Moreover, digital assessment makes it possible to investigate the dynamic interactions between students and assessment items (response times, ways of constructing an answer or solving a problem). However, beyond strictly technical issues of efficacy, assessment via digital tools has opened a series of very delicate debates about the immediate future of evaluation in education. Is the use of ICT tools and digital practices in large-scale assessments a progressive development of traditional methodologies, or does it involve a profound pedagogical shift in the way education and learning will be managed in the immediate future? In developing this paper, the authors address a number of crucial limitations and problems that are emerging with the rapid spread of digital assessment tools and practices.
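
As a concrete illustration of the process data mentioned above (response times, the way an answer is constructed or revised), the following minimal Python sketch shows how such indicators could be derived from the event log of a computer-based testing platform. The sketch is not drawn from the article: the log format and event names are hypothetical, invented for illustration.

    # Minimal sketch: summarising process data from a computer-based test.
    # The log schema (item_id, ISO timestamp, event name) is hypothetical;
    # real platforms record far richer interaction traces.
    from collections import defaultdict
    from datetime import datetime

    # Hypothetical event log for one student.
    events = [
        ("M01", "2019-05-02T09:00:00", "item_shown"),
        ("M01", "2019-05-02T09:00:41", "answer_changed"),
        ("M01", "2019-05-02T09:01:10", "answer_changed"),
        ("M01", "2019-05-02T09:01:12", "item_submitted"),
        ("M02", "2019-05-02T09:01:15", "item_shown"),
        ("M02", "2019-05-02T09:02:30", "item_submitted"),
    ]

    def summarise(log):
        """Per item: response time in seconds and number of answer revisions."""
        shown, submitted = {}, {}
        revisions = defaultdict(int)
        for item, ts, event in log:
            t = datetime.fromisoformat(ts)
            if event == "item_shown":
                shown[item] = t
            elif event == "item_submitted":
                submitted[item] = t
            elif event == "answer_changed":
                revisions[item] += 1
        return {item: {"response_time_s": (submitted[item] - shown[item]).total_seconds(),
                       "revisions": revisions[item]}
                for item in shown if item in submitted}

    print(summarise(events))
    # {'M01': {'response_time_s': 72.0, 'revisions': 2},
    #  'M02': {'response_time_s': 75.0, 'revisions': 0}}

Process indicators of this kind (time on item, number of revisions) are precisely what paper-and-pencil answer sheets cannot capture, which is the analytical advantage the abstract points to.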

Keywords: Digital Assessment; Large-Scale Assessment; Assessment Practices; Computer-Based Assessment; New Skills Domains.

  1. Anderson R., Ainley J. (2010). Technology and learning: Access in schools around the world. In: McGaw B., Baker E., Peterson P. (a cura di), International encyclopedia of education, Amsterdam: Elsevier.
  2. Baker E.L., Niemi D., Chung G.K. (2008). Simulations and the transfer of problem-solving knowledge and skills. In: Baker E., Dickieson J., Wulfeck W., O’Neil H.F. (a cura di), Assessment of problem solving using simulations, New York: Erlbaum.
  3. Beauchamp G., Kennewell S. (2010). Interactivity in the Classroom and its Impact on Learning. Computers & Education, 54, 759-766.
  4. Beller M. (2013). Technologies in Large-Scale Assessments: New Directions, Challenges, and Opportunities. In: von Davier M., Gonzalez E., Kirsch I., Yamamoto K. (a cura di), The Role of International Large-Scale Assessments: Perspectives from Technology, Economy, and Educational Research, Dordrecht: Springer.
  5. Benadusi L. (2019). Le molte interpretazioni del concetto di competenze. Una maionese impazzita o ben assortita? Scuola democratica, 1, 41-61.
  6. Bennett R.E. (1998). Validity and automated scoring: It’s not only the scoring. Educational Measurement: Issues and Practice, 17(4), 9–17.
  7. Bennett R.E. (2010). Technology for large-scale assessment. In: Peterson P., Baker E., McGaw B. (a cura di), International encyclopedia of education, Oxford: Elsevier.
  8. Bennett R.E. (2015). The changing nature of educational assessment. Review of Research in Education, 39(1), 370–407.
  9. Bennett R.E., Braswell J., Oranje A., Sandene B., Kaplan B., Yan F. (2008). Does it matter if I take my mathematics test on computer? Journal of Technology, Learning and Assessment, 6(9).
  10. Bennett R.E., Jenkins F., Persky H., Weiss A. (2003). Assessing complex problem-solving performances. Assessment in Education, 10, 347–59.
  11. Biesta G.J. (2010). What is education for? Good education in an age of measurement: Ethics, politics, democracy. London: Taylor & Francis.
  12. Bridgeman B., Lennon M.L., Jackenthal A. (2003). Effects of screen size, screen resolution, and display rate on computer-based test performance. Applied Measurement in Education, 16, 191–205.
  13. Bridgeman B. (2009). Experiences from large-scale computer-based testing in the USA. In: Scheuermann F., Björnsson J. (a cura di), The transition to computer-based assessment. Luxembourg: European Communities.
  14. Buerger S., Kroehne U., Goldhammer F. (2016). The transition to computer-based testing in large-scale assessments: Investigating (partial) measurement invariance between modes. Psychological Test and Assessment Modeling, 58(4), 597.
  15. Chudowsky N., Pellegrino J.W. (2003). Large-scale assessments that support learning: What will it take? Theory Into Practice, 42(1), 75–83.
  16. Clariana R., Wallace P. (2002). Paper–based versus computer–based assessment: key factors associated with the test mode effect. British Journal of Educational Technology, 33(5), 593-602.
  17. Collis B., Moonen J. (2012). Flexible learning in a digital world: Experiences and expectations. London: Routledge.
  18. Colombo M. (2016). Introduction to the Special Section. The Digitalization of Educational Practices: How Much and What Kind? Italian Journal of Sociology of Education, 8(2), 1-10.
  19. Conole G., Warburton B. (2005). A review of computer-assisted assessment. Research in Learning Technology, 13(1), 17–31.
  20. Dehaene S. (2010). Les neurones de la lecture. La nouvelle science de la lecture et de son apprentissage, Paris: Odile Jacob.
  21. Dehaene S. (2011). The massive impact of literacy on the brain and its consequences for education, Human Neuroplasticity and Education, 117, 19-32.
  22. Di Gioacchino D., Lotti A., Tedeschi S. (2015). Digital Inequality in Italy and Europe. In: Strangio D., Sancetta G. (a cura di), Italy in a European Context. London: Palgrave Macmillan.
  23. European Commission (2019). Beyond achievement: A comparative look into 15-year-olds’ school engagement, effort and perseverance in the European Union. Luxembourg: European Communities.
  24. Eynon R. (2015). The quantified self for learning: critical questions for education. Learning, Media and Technology, 40(4), 407-411.
  25. Ferrari A. (2012). Digital competence in practice: An analysis of frameworks. Luxembourg: European Commission.
  26. Giancola O. (2015). Il nuovo scenario delle politiche educative: tra valutazione, quasi-mercato e l’emergere di nuovi attori. In: Moini G. (a cura di), Neoliberismi e azione pubblica. Il caso italiano, Roma: Edizioni Ediesse.
  27. Giancola O., Lovecchio D. (2018). Le indagini internazionali come standardizzazione delle competenze. In: Benadusi L., Molina S. (a cura di), Le competenze. Una mappa per orientarsi, Bologna: Il Mulino.
  28. Giancola O., Viteritti A. (2019). Le competenze nello spazio globale dell’educazione. Discorsi, modelli e misure. Scuola democratica, 1, 11-40.
  29. Gillies R.M., Ashman A.F. (2003). Cooperative Learning: The Social and Intellectual Outcomes of Learning in Groups, London: Falmer Press.
  30. Greenhow C., Robelia B., Hughes J.E. (2009). Learning, teaching, and scholarship in a digital age: Web 2.0 and classroom research: What path should we take now? Educational Researcher, 38(4), 246-259.
  31. Gui M., Argentin G. (2011). Digital skills of internet natives: Different forms of digital literacy in a random sample of northern Italian high school students, New Media & Society, 13(6), 963-80.
  32. Halldórsson A., McKelvie P., Bjornsson J. (2009). Are Icelandic boys really better on computerized tests than conventional ones: Interaction between gender, test modality and test performance. In: Scheuermann F., Björnsson J. (a cura di), The transition to computer-based assessment. Luxembourg: European Communities.
  33. Horkay N., Bennett R.E., Allen N., Kaplan B., Yan F. (2006). Does it matter if I take my writing test on computer? An empirical study of mode effects in NAEP. Journal of Technology, Learning and Assessment, 5(2).
  34. INVALSI (2018). Rapporto prove Invalsi 2018. Roma: INVALSI.
  35. Johnson M., Green S. (2006). On-line mathematics assessment: The impact of mode on performance and question answering strategies. Journal of Technology, Learning, and Assessment, 4(5), 311–26.
  36. Jonassen D.H., Land S.M. (2012). Theoretical Foundations of Learning Environments, New York: Routledge.
  37. Jonassen D.H., Peck K.L., Wilson G.B. (1999). Learning with technology. A constructivist perspective, Upper Saddle River, N.J.: Merrill.
  38. Koretz D. (2008). Measuring up. What educational testing really tells us. Cambridge, MA: Harvard University Press.
  39. Landri P. (2018). Digital Governance of Education. Technology, Standards and Europeanization of Education, London: Bloomsbury Academic.
  40. Leeson H. V. (2006). The mode effect: A literature review of human and technological issues in computerized testing. International Journal of Testing, 6(1), 1-24.
  41. Lingard B., Lewis S. (2016). Globalisation of the Anglo-American approach to top-down, test-based educational accountability. In: Brown G.T.L., Harris L.R. (a cura di), Handbook of human and social conditions in assessment, London: Routledge.
  42. Livingstone S. (2012). Critical reflections on the benefits of ICT in education. Oxford Review of Education, 38(1), 9-24.
  43. Martin R. (2008). New possibilities and challenges for assessment through the use of technology. In: Scheuermann F., Björnsson J. (a cura di), The transition to computer-based assessment. Luxembourg: European Communities.
  44. McDonald A. S. (2002). The impact of individual differences on the equivalence of computer-based and paper-and-pencil educational assessments. Computers in Education, 39(3), 299–312.
  45. Novak J.D., Gowin D.B. (1984). Learning how to learn. Cambridge: Cambridge University Press.
  46. OECD (2008). Issues arising from the PISA 2009 field trial of the assessment of reading of electronic texts. Paris: OECD Publishing.
  47. OECD (2015). Students, Computers and Learning: Making the Connection, Paris: OECD Publishing.
  48. OECD (2017). PISA 2015 Results (Volume V): Collaborative Problem Solving, PISA, Paris: OECD Publishing.
  49. OECD (2014). Technical background. PISA 2012 results. Paris: OECD Publishing.
  50. Pandolfini V. (2016). Exploring the Impact of ICTs in Education: Controversies and Challenges. Italian Journal of Sociology of Education, 8(2).
  51. Parshall C.G., Spray J.A., Kalohn J.C., Davey T. (2002). Practical considerations in computer-based testing. New York: Springer.
  52. Ripley M. (2009). Transformational computer-based testing. In: Scheuermann F., Björnsson J. (a cura di), The transition to computer-based assessment. Luxembourg: European Communities.
  53. Rivoltella P.C. (2006). Screen generation: gli adolescenti e le prospettive dell’educazione nell'età dei media digitali, Milano: Vita e Pensiero.
  54. Salmieri L. (2019). The Rhetoric of Digitalization in Italian Educational Policies: Situating Reception among Digitally Skilled Teachers. Italian Journal of Sociology of Education, 11(1), 162-183.
  55. Sharan Y. (2010). Cooperative Learning for Academic and Social Gains: valued pedagogy, problematic practice. European Journal of Education, 45(2), 300–13.
  56. Shute V.J., Leighton J.P., Jang E.E., Chu M-W. (2016). Advances in the Science of Assessment. Educational Assessment, 21(1), 34–59.
  57. Thompson N., Weiss D. (2009). Computerised and adaptive testing in educational assessment. In: Scheuermann F., Björnsson J. (a cura di), The transition to computer-based assessment. Luxembourg: European Communities.
  58. Tout D., Coben D., Geiger V., Ginsburg L., Hoogland K., Maguire T. (2017). Review of the PIAAC numeracy assessment framework: Final report. Camberwell, Australia: ACER.
  59. Van der Linden W.J., Hambleton R.K. (1997). Handbook of modern item response theory, New York: Springer.
  60. Wang S., Jiao H., Young M., Brooks T., Olson J. (2007). A meta-analysis of testing mode effects in grade K-12 mathematics tests. Educational and Psychological Measurement, 67(2), 219–38.
  61. Wang S., Jiao H., Young M., Brooks T., Olson J. (2008). Comparability of computer-based and paper-and-pencil testing in K-12 reading assessments. Educational and Psychological Measurement, 68(1), 5–24.
  62. Weiss D., Kingsbury G. (1984). Application of computerized adaptive testing to educational problems. Journal of Educational Measurement, 21, 361–75.
  63. Williamson B. (2015). Digital education governance: data visualization, predictive analytics, and ‘real-time’ policy instruments. Journal of Education Policy, 31(2), 123-41.
  64. Wunenburger J.J. (1997). Philosophie des images, Paris: PUF.
  65. Yamamoto K., Shin H.J., Khorramdel L. (2018). Multistage Adaptive Testing Design in International Large‐Scale Assessments. Educational Measurement: Issues and Practice, 37(4), 16-27.
  66. Yan D., von Davier A., Lewis C. (2014). Computerized Multistage Testing: Theory and Applications. Boca Raton: CRC Press.


Luca Salmieri, Orazio Giancola, Strumentazioni e pratiche digitali nella valutazione degli apprendimenti, in "RIV Rassegna Italiana di Valutazione" 70/2018, pp. 75-97, DOI: 10.3280/RIV2018-070005