
Peer evaluation in the Social and Political Sciences: a Lesson from the VQR 2004-2010
Authors: Antonio Fasanella, Annalisa Di Benedetto
Year: 2015 Issue: 2 Language: Italian
Pages: 29 (pp. 44-72) FullText PDF: 163 KB
DOI:  10.3280/SP2015-002003

A scientific evaluation must follow rigorous, codified and reproducible procedures. The key focus of this work is the methodological adequacy of the peer review procedure in the latest Italian research evaluation exercise (VQR 2004-2010). The analysis first examines the referee selection procedure, and then the matching between referees and the research products to be evaluated. The evaluation form available to the referees is also examined, together with the procedure for synthesizing the scores. The final focus concerns the stability and homogeneity of the peer judgments. Concrete proposals for improving the evaluation procedures are also presented. Because of its epistemic peculiarities, the study is limited to the Political and Social Sciences Area.
Keywords: Research Evaluation; Peer Review; Political and Social Sciences; Research Quality Evaluation Exercise (VQR 2004-2010); Evaluation Methodological Adequacy


Antonio Fasanella, Annalisa Di Benedetto, Peer evaluation in the Social and Political Sciences: a Lesson from the VQR 2004-2010, in "SOCIOLOGIA E POLITICHE SOCIALI" 2/2015, pp. 44-72, DOI: 10.3280/SP2015-002003

