HEC’s rankings
If a student secures admission to the MBA programmes of Lums, IBA and Iqra University, then he or she should choose Iqra over Lums and IBA. This is the suggestion implicit in the Higher Education Commission’s ranking of universities released recently. Iqra is ranked the best university in the business category; IBA comes in second.
If we were to rank universities on the basis of the salaries that fresh graduates of a university earn, HEC’s ranking would stand on its head. Rankings based on the proportion of prestigious foreign scholarships won by the graduates of a university would do the same.
Seeing HEC rankings in the light of market perceptions makes one want to laugh. Quaid-i-Azam University, Punjab University, Comsats, Karachi University and Rawalpindi’s Arid Agriculture University (popularly known as ‘Barani University’), are ranked one to five, and Lums comes in at sixth position in the general category. Barani’s being ahead of Lums is indeed a laughable notion. The problem is that HEC’s rankings appear to assign more weight to quantity than quality — the number of students, of teachers and of research papers.
The ranking, based on a 66-indicator criterion, assigns a certain weight to each indicator, with the weights totalling 100. The bias of some indicators against some of the better universities is discussed here:
For instance, ‘the ratio of fresh PhD faculty over total fresh recruitment of faculty’. A university that managed to retain its faculty, and therefore did not need to recruit fresh faculty, stands to lose on this count.
Also ‘ratio of … faculty having terminal degrees from other institutions over total full-time faculty members’. The university that hires its own graduates as teachers stands to lose. If Harvard, the best university in the world, were to refrain from hiring its own graduates, it would be recruiting not the best but perhaps the second-best graduates.
HEC’s idea of which universities are the best is quite odd.
Take, ‘selectivity: enrolment ratio … to total applicants’. This indicator is focused on quantity. In the Pakistani context, the more effort a university demands, the fewer the applicants and thus the enrolments. Go for money-making, don’t be too tough on students, and you have more applicants and greater enrolment.
Then, ‘ratio of active PhD students to total active enrolled students’. Again, this is focused on quantity. Universities that mass-produce PhDs without paying attention to quality attract more students.
‘Publications in … impact-factor journals per full-time faculty’ and ‘citations per paper for total number of … impact-factor papers’. This goes against universities offering programmes in the social sciences only, because it takes more time to publish in this field. Publications in the natural sciences may require experiments in on-campus laboratories, while social science research requires field surveys. For example, if one were to publish a paper based on the profiles of the poor in Pakistan, just gathering the required data may take years.
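The mechanics of such a weighted scheme can be sketched in a few lines. The indicator names, weights and scores below are invented for illustration; they are not HEC’s actual 66 indicators. The sketch shows how a composite score rewards a large, quantity-focused university over a small, selective one:

```python
# Minimal sketch of a weighted-indicator ranking.
# Indicator names, weights and scores are illustrative assumptions,
# NOT HEC's actual criteria.

def composite_score(scores, weights):
    """Combine per-indicator scores (0-1) using weights that sum to 100."""
    assert abs(sum(weights.values()) - 100) < 1e-9
    return sum(scores[k] * weights[k] for k in weights)

# Hypothetical three-indicator example (HEC uses 66).
weights = {"phd_ratio": 40, "enrolment": 35, "publications": 25}

universities = {
    "Univ A (large, quantity-focused)": {
        "phd_ratio": 0.9, "enrolment": 0.9, "publications": 0.8},
    "Univ B (small, selective)": {
        "phd_ratio": 0.6, "enrolment": 0.3, "publications": 0.9},
}

# Rank by composite score, highest first.
ranked = sorted(universities.items(),
                key=lambda kv: composite_score(kv[1], weights),
                reverse=True)
for name, scores in ranked:
    print(f"{name}: {composite_score(scores, weights):.1f}")
```

Because the quantity-driven indicators carry most of the weight, the large university scores 87.5 against the selective one’s 57.0, regardless of the quality of what either produces.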
I am sure that HEC adopted these indicators after a thorough review of the methodology used internationally to rank universities. It is important to learn from peers. But the need is to factor in local conditions as well.
Some universities that are known for specific disciplines may offer limited programmes and therefore have fewer students. Comparing smaller universities that have, say, fewer than 3,000 students to those with over 15,000 students is like comparing apples and oranges. The ranking would obviously show the bigger university as the better one, though the smaller might be the best in its own specialised field.
A university’s overall rank might be driven by a good programme in a single discipline. For example, the HEJ Research Institute of Chemistry based in Karachi University runs the country’s largest doctoral programme with over 280 PhD students. Seemingly, the institute employs a large faculty of stature. The institute’s PhD enrolment, faculty strength and research publications must have played a dominant role in landing it in fourth place. Of what use is a chemistry-driven fourth rank to a person looking for a PhD programme in political science?
For rankings that make more sense, HEC should undertake subject-based rankings, and distinguish between small and large universities, as well as between universities offering programmes in the social and natural sciences. The 66-indicator criteria should be modified to account for local conditions and to emphasise quality rather than quantity.
In the interest of transparency, marks scored by different universities on each of the 66 indicators should be made public. This would also help the universities figure out where they lag behind and what they have to do in order to score better.
The writer heads the School of Public Policy at the Pakistan Institute of Development Economics.
Twitter: @khawaja_idrees
Published in Dawn, March 3rd, 2016