
International University Rankings – Benefits and limitations



1. International University Rankings – Benefits and limitations. Seminar on international rankings (Seminari sobre rànquings internacionals), Barcelona, 21 June 2011. Gero Federkeil, CHE Centre for Higher Education, Germany. www.che.de

2. Presentation: The CHE – A Short Introduction; The rise of international rankings; International rankings – indicators & data sources; International rankings – a critical view; Conclusions.

3. The CHE – Centre for Higher Education
Private, not-for-profit organisation, founded in 1994 by the Bertelsmann Foundation and the German Rectors' Conference. Goal: to initiate and promote reforms in German higher education. Activities: HE policy issues (e.g. Bologna, funding, …); consulting; communication & training; ranking.

4. The CHE – Centre for Higher Education
The ranking of German universities was among the founding tasks of the CHE; the first ranking was published in 1998, followed by an extension of fields and indicators and continuous further development of the methodology. Internationalisation: extension of the CHE Ranking to Austria, Switzerland and the Netherlands; 2011: ranking in Spain in cooperation with Fundació CYD; U-Multirank project to "develop the concept and test the feasibility of a global multi-dimensional university ranking". Founding member of the IREG Observatory on Academic Ranking and Excellence ("Berlin Principles").

5. The CHE – A Short Introduction; The rise of international rankings; International rankings – indicators & data sources; International rankings – a critical view; Conclusions.

6. The rise of international rankings
Shanghai Jiao Tong University: Academic Ranking of World Universities (ARWU, 2003 - ); original purpose: comparison of Chinese universities with the rest of the world. http://www.arwu.org/index.jsp
Times Higher Education (THE) / QS World Rankings (2004-2009); 2010: separation of the partners.
QS World University Rankings (2010 - ); private consulting company. http://www.topuniversities.com/
THE / Thomson Reuters World Rankings (2010 - ); co-operation with a leading provider of bibliometric databases. http://www.timeshighereducation.co.uk/world-university-rankings/

7. The rise of international rankings
HEEACT (Taiwan): Performance Ranking of Scientific Papers; purely bibliometric ranking. http://ranking.heeact.edu.tw/en-us/2010/homepage/
Centre for Science and Technology Studies (CWTS): Leiden Ranking; purely bibliometric ranking. http://www.cwts.nl/ranking/LeidenRankingWebSite.html
Scimago Institutions Ranking: purely bibliometric ranking. http://www.scimagoir.com/index.php
École des Mines de Paris ranking: analysis of the universities from which the CEOs of the Top 500 companies graduated. http://www.mines-paristech.fr/Actualites/PR/Ranking2011EN-Fortune2010.html
Webometrics: ranking of web presence. http://www.webometrics.info/

8. The duality of rankings
The emergence of global rankings is a result of growing global competition in higher education; at the same time, those rankings reinforce this competition through their own results. Global rankings have an impact on national policies (excellence initiatives, scholarships) and on institutional strategies.

9. The CHE – A Short Introduction; The rise of international rankings; International rankings – indicators & data sources; International rankings – a critical view; Conclusions.

10. World Rankings: Indicators
Shanghai Jiaotong Ranking (ARWU) – indicators and weights: SCI publications 20 %; publications in Science & Nature 20 %; highly cited authors 20 %; Nobel Prizes & Fields Medals 20 %; alumni with Nobel Prizes 10 %; size 10 %. Comments: measurement of research; due to the indicators/data bases, mainly science and technology.
QS Ranking – indicators and weights: reputation among scholars 40 %; reputation among employers 10 %; citations 20 %; student-staff ratio 20 %; international students 10 %; international staff 10 %. Comments: a mixture of different dimensions, mainly reputation; what does the total score measure?
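Both total scores are weighted sums of normalised indicator values. The minimal Python sketch below only illustrates that arithmetic, using the ARWU weights from the table above; the indicator keys, the example university and its scores are invented for illustration and do not reproduce the rankings' actual data collection or normalisation.

```python
# Illustrative weighted-sum composite score, using the ARWU weights from the
# slide. Indicator scores are assumed to be pre-normalised to a 0-100 scale;
# the example values are invented.

ARWU_WEIGHTS = {
    "sci_publications": 0.20,            # SCI publications
    "nature_science_publications": 0.20,  # publications in Science & Nature
    "highly_cited_authors": 0.20,
    "nobel_fields_awards": 0.20,         # Nobel Prizes & Fields Medals (staff)
    "nobel_alumni": 0.10,
    "size": 0.10,
}

def composite_score(scores, weights):
    """Weighted sum of normalised indicator scores."""
    return sum(weights[name] * scores[name] for name in weights)

example_university = {
    "sci_publications": 72.0,
    "nature_science_publications": 55.0,
    "highly_cited_authors": 60.0,
    "nobel_fields_awards": 30.0,
    "nobel_alumni": 25.0,
    "size": 50.0,
}

print(composite_score(example_university, ARWU_WEIGHTS))
# 0.2*(72+55+60+30) + 0.1*(25+50) = 50.9
```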

11. World Rankings: Indicators
THE World Rankings – indicators and weights: teaching 30.0 %; research 30.0 %; citations 32.5 %; industrial income 2.5 %; international mix 5 %. Comments: mainly research; 34.5 % based on reputation; what does the total score measure?
HEEACT Ranking – indicators and weights: publications 1999-2009 20 %; citations 1999-2009 10 %; research excellence 50 % (h-index 20 %, highly cited papers 15 %, papers in high-impact journals 15 %). Comments: research only; bibliometric data = bias towards the sciences; long-term perspective.

12. Comparison of Results: Top 10
Position | QS | THE | ARWU
1 | University of Cambridge | Harvard University | Harvard University
2 | Harvard University | California Institute of Technology | University of California, Berkeley
3 | Yale University | Massachusetts Institute of Technology | Stanford University
4 | UCL (University College London) | Stanford University | Massachusetts Institute of Technology (MIT)
5 | Massachusetts Institute of Technology (MIT) | Princeton University | University of Cambridge
6 | University of Oxford | University of Cambridge | California Institute of Technology
7 | Imperial College London | University of Oxford | Princeton University
8 | University of Chicago | University of California, Berkeley | Columbia University
9 | California Institute of Technology (Caltech) | Imperial College London | University of Chicago
10 | Princeton University | Yale University | University of Oxford
11 | Columbia University | University of California, Los Angeles | Yale University
12 | University of Pennsylvania | University of Chicago | Cornell University
13 | Stanford University | Johns Hopkins University | University of California, Los Angeles
14 | Duke University | Cornell University | University of California, San Diego
15 | University of Michigan | Swiss Federal Institute of Technology Zurich | University of Pennsylvania
16 | Cornell University | University of Michigan | University of Washington
17 | Johns Hopkins University | University of Toronto | University of Wisconsin - Madison
18 | ETH Zurich (Swiss Federal Institute of Technology) | Columbia University | The Johns Hopkins University
19 | McGill University | University of Pennsylvania | University of California, San Francisco
20 | Australian National University | Carnegie Mellon University | The University of Tokyo
University of Barcelona | 148 | 142 | 201 - 300
Autonomous University of Barcelona | 173 | Not among Top 200 | 301 - 400

13. The CHE – A Short Introduction; The rise of international rankings; International rankings – indicators & data sources; International rankings – a critical view; Conclusions.

14. Indicators used – a critical assessment
Bibliometric indicators: a central element of research; methodological quality differs between rankings (e.g. use of field-normalised citation rates); publications lack control for institutional size; field biases (humanities, engineering); language bias.
Reputation: not a performance indicator; highly dependent on the sample; reputation is a social reality; not very reliable in an international perspective.
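The field-normalised citation rates mentioned above relate a paper's citations to the world average of its field, which removes part of the field bias. The sketch below shows the basic idea only; the field baselines and paper counts are invented, and real implementations (e.g. the CWTS indicators) also normalise by publication year and document type.

```python
# Basic idea of a field-normalised citation indicator: each paper's citation
# count is divided by the world average of its field, so low-citation fields
# such as mathematics are not penalised against medicine. All numbers invented.

world_average_citations = {      # hypothetical field baselines
    "clinical medicine": 12.0,
    "mathematics": 2.5,
}

papers = [                       # (field, citations) for one institution
    ("clinical medicine", 24),   # twice the field average
    ("clinical medicine", 6),    # half the field average
    ("mathematics", 5),          # twice the field average
]

def mean_normalised_citation_score(papers, baselines):
    """Average of per-paper ratios citations / field average."""
    ratios = [cites / baselines[field] for field, cites in papers]
    return sum(ratios) / len(ratios)

print(mean_normalised_citation_score(papers, world_average_citations))
# (2.0 + 0.5 + 2.0) / 3 = 1.5, i.e. 50 % above the world average
```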

15. Indicators used – a critical assessment
Nobel Prizes: high-level excellence; field biases (only a few fields); time problem / institutional affiliation.
"Small indicators": try to bring in dimensions other than research; problems in definition and data collection (e.g. international students); problems in validity (e.g. student-staff ratio).

16. General approach – a critical assessment
International rankings differ in their indicators, but with regard to the general methodology there is a ranking orthodoxy and a growing number of alternative approaches.

17. Ranking orthodoxy I: Institutional ranking

18. Critique of ranking orthodoxy I
Multi-level rankings: institutional rankings → field-specific rankings. Most target groups and users (prospective students, academic staff) are interested in information about "their" field. Universities are heterogeneous units; fields and faculties differ in their performance, so rankings of whole institutions give misleading averages (illustrated in the sketch below). Global rankings have increasingly introduced field-based rankings.
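The "misleading averages" point can be shown with a toy example: two universities with opposite field profiles end up with the same institution-level average, although they differ sharply in the field a prospective student or researcher actually cares about. All values are invented.

```python
# Toy illustration of how institution-level averages hide field differences.

field_scores = {
    "University A": {"engineering": 90, "humanities": 30},
    "University B": {"engineering": 30, "humanities": 90},
}

for university, scores in field_scores.items():
    institutional_average = sum(scores.values()) / len(scores)
    print(university,
          "average:", institutional_average,
          "| engineering:", scores["engineering"])

# Both universities have an institutional average of 60.0, yet for someone
# choosing an engineering programme they are 60 points apart (90 vs 30).
```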

19. Ranking orthodoxy II: „Composite indicator“

20. Critique of ranking orthodoxy II
Composite overall indicator → multi-dimensional ranking. Composite indicators blur profiles and strengths & weaknesses. There are neither theoretical nor empirical arguments for assigning specific weights to single indicators. Preferences regarding indicators are heterogeneous among stakeholders and users ("quality is in the eye of the beholder"), so fixed weights patronise users; rankings should leave the decision about the relevance of indicators to the users (see the sketch below). Global rankings have started to include elements of personalisation.
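The argument against fixed weights can be made concrete: with identical indicator values, two users with different priorities arrive at opposite orderings, which is exactly the decision a personalised, user-weighted ranking hands back to the user. The universities, indicator scores and weight profiles in this sketch are invented.

```python
# Same indicator values, different user weightings, different orderings.

universities = {
    "Uni X": {"research": 90, "teaching": 50, "internationalisation": 40},
    "Uni Y": {"research": 60, "teaching": 85, "internationalisation": 70},
}

weight_profiles = {
    "research-oriented user": {"research": 0.7, "teaching": 0.2, "internationalisation": 0.1},
    "programme-seeking student": {"research": 0.2, "teaching": 0.6, "internationalisation": 0.2},
}

def ranked(universities, weights):
    """Sort universities by their weighted score, best first."""
    scored = {name: sum(weights[k] * vals[k] for k in weights)
              for name, vals in universities.items()}
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)

for profile, weights in weight_profiles.items():
    print(profile, "->", ranked(universities, weights))
# The research-heavy weighting puts Uni X first (77.0 vs 66.0); the
# teaching-heavy weighting puts Uni Y first (77.0 vs 56.0).
```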

21. Ranking orthodoxy III: League tables
